Migrate from Active Directory Integrated Windows Authentication in VMware vSphere 7.0

VMware is deprecating Integrated Windows Authentication (IWA) in vSphere 7.0 and the feature will be removed in a later release. Below is from the VMware KB:

Support for IWA continues to be available in vSphere 7.0 and will be phased out in a future release. Although IWA can still be configured, we highly recommend using AD over LDAP or Federated Identity (AD FS).

Deprecation of Integrated Windows Authentication (78506) (vmware.com)

In this post we will be going through switching over to Active Directory over LDAP. We will also be using LDAPS, as it secures the connection with certificates, is much better from a security standpoint, and Microsoft now requires channel binding and signing for applications that use LDAP.

2020 LDAP channel binding and LDAP signing requirements for Windows (microsoft.com)

If you haven’t configured a certificate on your domain controllers to allow LDAPS yet, I would configure this first before proceeding with the swap over to the Active Directory over LDAP identity provider.

If we check the existing AD IWA identity source we can see the warning that the feature is deprecated.

I usually create a new account for each application's LDAP connection just so I can keep track of what account is used where.

For LDAP authentication in a Windows domain a standard account with just Domain Users rights should have enough permission, as it is best to use least privilege for service accounts.

To confirm a Windows AD domain is set up to use LDAPS, we can use ldp.exe on a device that has the Active Directory tools installed to test the LDAPS connection.

Open ldp.exe, click Connection > Connect, add in the server name, set the port to 636 and tick SSL.

If the configuration is returned then LDAPS is working.
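If you prefer PowerShell over ldp.exe, the below is a minimal sketch that attempts an LDAPS bind on port 636 (dc01.domain.local is a placeholder for your own domain controller name); if the bind returns without an error, LDAPS and the certificate are working.

Test-NetConnection -ComputerName dc01.domain.local -Port 636   # confirms the port is reachable
Add-Type -AssemblyName System.DirectoryServices.Protocols
$ldaps = New-Object System.DirectoryServices.Protocols.LdapConnection "dc01.domain.local:636"
$ldaps.SessionOptions.SecureSocketLayer = $true
$ldaps.Bind()   # throws an LdapException if the TLS handshake or bind fails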

Once we have the account created and confirmed that LDAPS is working we can start setting up AD over LDAP in vCenter.

Since we will be using the same domain name as the IWA source, we need to remove the IWA source first or it will cause an error when trying to add the LDAPS source.

Log on to the vCenter web client > Menu > Administration > Single Sign On > Configuration.

Under Identity Sources, select the IWA source and click Remove.

Click ok to confirm removal.

Once the IWA is removed we can now add the AD LDAP connection.

Click Add on the Identity Sources page and select Active Directory over LDAP.

Add in the required details.

  • Name: Friendly name for the identity source.
  • Base DN: The level at which searches in AD will start for users or groups. To search all of AD use the top level, or select a sub OU to limit the searches.
  • Domain name: FQDN of the domain.
  • Domain alias: The NetBIOS / pre-Windows 2000 domain name.

When I selected the option to connect to any domain controller, I was getting the below error.

Cannot configure identity source due to Failed to probe provider connectivity [URI: ldaps://domainl ]; tenantName [vsphere.local], userName [User] Caused by: Can’t contact LDAP server.

To work around this I had to specify my DC manually.

As I have a certificate issued from an internal certificate authority, I will be selecting the CA cert for LDAPS, as this should trust any cert issued by the CA to my domain controllers.

Click Add to complete the AD over LDAP identity source.

If we check the websso.log under /var/log/vmware/sso on the vCenter appliance, we can see the certificate being verified when we log on with a domain account.

We have now moved from IWA to AD over LDAP; all existing groups and roles should still work.

Filtering VMware vCenter Server Events Using PowerCLI

Recently we needed to review some changes and remote console events to check what user was accessing a particular VM and what changes were made.

I find searching events in the vCenter web client a bit slow. I prefer to use Get-VIEvent, as it has multiple parameters that can be used to search and can also use regular expressions to filter by patterns.

I decided to do a blog post on how to filter events to show the different options that I use regularly to filter events.

First we need to connect to the vCenter server using a computer that has PowerCLI installed.

Connect-VIServer vCenterServer

Once connected we can start to use Get-VIEvent. To return events for a specific object we can use the -Entity parameter and the object name.

In the below example I am getting the last event for my LAB-Win10 VM

Get-VIEvent -Entity ObjectName -MaxSamples 1

We can also filter by entity and user name if we know the user tied to the event.

Get-VIEvent -Entity VM -Username User

We can filter the events by time range.

Get-VIEvent -Start "11/07/2021 20:48" -Finish "11/07/2021 21:00" | Select-Object EventTypeId,CreatedTime

Another option for filtering is to use where-object and search for a specific event message.

Get-VIEvent -Entity VM | Where-Object {$_.FullFormattedMessage -like "*VM started*"}

There are additional properties like Host, ComputeResource, Datacenter, etc. that are not returned as readable values unless you format the results.

To format the results we can use Select-Object and create calculated properties to give a name to the property and select the property value. The below will show the datacenter, cluster and host the event was created on.

Get-VIEvent -Entity object -Username User | Select-Object Message,CreatedTime,UserName,@{N="DataCenter";E={$_.DataCenter.Name}},@{N="Compute";E={$_.ComputeResource.Name}},@{N="Host";E={$_.Host.Name}}

Below is an example showing what the event looks like before and after using the calculated properties to return the readable values.

Once we have the events we want we can either review them in the PowerShell console or use Export-Csv to export the results.
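As a hedged example pulling the pieces above together (the message pattern and export path are just placeholders), the results can be filtered and exported in one pipeline:

Get-VIEvent -Entity LAB-Win10 -Start (Get-Date).AddDays(-7) -MaxSamples 5000 |
    Where-Object {$_.FullFormattedMessage -like "*console*"} |
    Select-Object CreatedTime, UserName, FullFormattedMessage,
        @{N="DataCenter";E={$_.DataCenter.Name}},
        @{N="Host";E={$_.Host.Name}} |
    Export-Csv -Path C:\Temp\LAB-Win10_Events.csv -NoTypeInformation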

Audit VMware vCenter Server Permission Using PowerCLI

As part of our VMware 6.7 to 7.0 upgrade we wanted to audit the existing vCenter server permissions. We have a lot of contractors who come in to do work and users who have had permissions assigned, but these permissions are not always removed.

We wanted to get a report that exports each of the permissions assigned in vCenter.

I could do this manually but it would take a while and is not easily repeatable, so I decided to create a quick script that will export the required information.

The script will be calling two commands (Get-VIPermission to export permissions and Get-VIRole to export the assigned privileges) and then formatting the results.

The script also has some mandatory variables (one for the vCenter server and one for the export path) and there is some error handling in case there is no connection to the vCenter server or the export folder doesn't exist.

There are three types of objects in VMware permissions.

  • Privilege: Allows a specific action (create, delete, manage, etc.) or the right to view a specific property.
  • Role: A set of privileges grouped together so they can be assigned to an object.
  • Permission: A user or group that has been assigned a role on a specific object.

If we run Get-VIPermission on its own we will see all permissions returned.

We can select one specific permission by using -Principal and expand the output using Format-List. This gives a bit more information but we are still missing the assigned privileges.


This is where we use Get-VIRole as this has a property that shows privileges that have been assigned to the role.
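Below is a simplified sketch of how the two commands can be joined; it is not the full script, just the core idea of matching each permission's role to its privilege list (the export path is a placeholder).

$report = foreach ($perm in Get-VIPermission) {
    $role = Get-VIRole -Name $perm.Role
    [PSCustomObject]@{
        Principal  = $perm.Principal
        IsGroup    = $perm.IsGroup
        Entity     = $perm.Entity.Name
        Role       = $perm.Role
        Propagate  = $perm.Propagate
        Privileges = $role.PrivilegeList -join '; '
    }
}
$report | Export-Csv .\vCenter_Permissions.csv -NoTypeInformation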

Below is an example of the script running.

.\VMware_Permissions_Audit.ps1 -VCServer lab-vc.thesleepyadmin.local -ReportExport .\

Once completed the csv file should be exported with the vCenter server name.

Below is what the csv export should look like.

Below is an example of the error handling when the connection to vCenter fails.

The full script can be downloaded from the below link to my GitHub.

Scripts/VMware/Permissions_Audit at master · TheSleepyAdmin/Scripts (github.com)

Updating VMware tools on ESXi 7.0 host using VMware Lifecycle Manager

There was a recent VMware local privilege escalation vulnerability affecting VMware Tools versions below 11.2.6. See the VMware advisory VMSA-2021-0013 (vmware.com).

The vulnerability has been fixed in VMware Tools version 11.3.

VMware Tools 11.3.0 Release Notes

We needed to update the version of VMware Tools manually, as the tools are not currently included in any of the standard baselines we apply to our hosts.

I decided to do a post on how to update the version of VMware Tools using a VMware Lifecycle Manager baseline, as it is a little bit different than VMware Update Manager.

First we need to go to Lifecycle Manager: open the vSphere web console > Menu > Lifecycle Manager.

In Lifecycle Manager the tools should already be synced, whereas previously in VMware Update Manager the tools needed to be manually uploaded.

The quickest way I find to check that the latest tools have been synced is by clicking on Image Depot and selecting Components.

We could also check under updates and turn off show only rollup updates. (If the tools required a reboot it would show under impact)

Next we will create a baseline to apply the latest tools.

Go to baselines and select new baseline.

Give the baseline a name and select Patch.

Untick Automatically update this baseline.

Untick show only rollup updates and filter for VMware Tools. There will probably be a different VMware Tools version for 6.x and 7.x, so check before adding to the baseline.

Click next and complete the baseline creation.

We can check the current tools status by going to the ESXi host > Updates > VMware Tools and checking the status.

We can now apply the baseline and run the check again and it should show as out of date.

The baseline can be applied either directly to the ESXi host or to the cluster. We will be applying it to the cluster as it saves time having to apply it to each host individually.

Go to the cluster > Updates > Attach and select Attach Baseline.

Select the VMware tools baseline and attach.

Next run a compliance check on the ESXi host.

Check the baseline status.

Next we will remediate the baseline to apply the latest tools.

If there are no issues with the pre-check, click Remediate.

Once the remediation is done the tools should show as compliant.

Once applied, the VM should now pick up that there is a new tools version available.

The tools can now be applied to the VM either using a script, update on reboot or manually.
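If you want to go the script route, a hedged PowerCLI sketch is below; it only targets VMs reporting their tools as out of date and uses -NoReboot so the upgrade does not restart the guest.

Get-VM |
    Where-Object {$_.ExtensionData.Guest.ToolsVersionStatus -eq 'guestToolsNeedUpgrade'} |
    Update-Tools -NoReboot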

Report on users MFA status in Office 365 using PowerShell

During a recent audit we wanted to confirm what users had MFA enabled in Office 365. We use conditional access policy to enforce MFA.

We wanted to check each user to see if they had set up MFA and had a method configured. We also wanted to get information on licensing status and assigned licenses.

The only pre-req for using the script is that the MSOnline PowerShell module is installed.

To install the MSOnline module, open an admin PowerShell window and run:

Install-Module -Name MSOnline

To confirm the module is installed run the below command.

Get-Module -ListAvailable MSOnline

First we need to connect to MSOnline. To do this run:

Connect-MsolService 

Once connected, to check the MFA status I will be using the StrongAuthenticationMethods property, as if MFA is configured for the user there will be a default method set.

For users that haven’t configured MFA, no StrongAuthenticationMethods value is set.

Below are the 4 methods available for MFA:

  • OneWaySMS
  • TwoWayVoiceMobile
  • PhoneAppOTP
  • PhoneAppNotification

In the script I only want to return the default method.
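A minimal sketch of the idea is below (the full script in the repo adds the licensing detail and the CSV export handling):

Get-MsolUser -All |
    Select-Object DisplayName, UserPrincipalName, IsLicensed,
        @{N='DefaultMFAMethod';E={($_.StrongAuthenticationMethods | Where-Object {$_.IsDefault}).MethodType}}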

There is only one mandatory parameter for the export path where the report will be exported to.

The below is an example of how to run the report.

.\Office365_MFA_Report.ps1 -ExportPath C:\temp

Below is what the output will look like.

The full script can be downloaded from the below link.

Scripts/Office365_MFA_Report.ps1 at master · TheSleepyAdmin/Scripts (github.com)

Weekly Active Directory Audit Report PowerShell

Recently a request came in from our security team to audit recently created and deleted AD objects, accounts due to expire (these are third-party users) and modified or created Group Policy Objects, so that they would be able to trace the changes happening in Active Directory.

I decided to write a PowerShell script that will export the required information and then send the csv exports to the users that require the information.

This could also be used to import the data into a dashboard, either by using the CSV files or, if the dashboard can run PowerShell scripts directly, like Power BI.

First, there are some mandatory parameters: ExportPath and Domains.

To allow the script to be run without emailing the csv files, I have left the SMTP server, to address and from address as non-mandatory parameters.

The script uses two modules:

  • GroupPolicy
  • ActiveDirectory

To install these on a Windows server, go to Add Roles and Features, select Group Policy Management, and under RSAT enable the Active Directory module for Windows PowerShell.
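On a Windows server the same features can also be added from PowerShell; the below is a hedged one-liner using the feature names as I know them:

Install-WindowsFeature -Name GPMC, RSAT-AD-PowerShell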

Once all the features are enabled we can run the script.

I have set the default time range to the last 7 days, but if you want to go back further then update the date value.
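For anyone curious, the below is a hedged sketch of the kind of queries used for each area, assuming the default 7-day window (it is not the exact code from the script):

$startDate = (Get-Date).AddDays(-7)
Get-ADObject -Filter {whenCreated -ge $startDate} -Properties whenCreated                           # recently created objects
Get-ADObject -Filter {isDeleted -eq $true -and whenChanged -ge $startDate} -IncludeDeletedObjects   # recently deleted objects
Search-ADAccount -AccountExpiring -TimeSpan 7.00:00:00                                              # accounts due to expire
Get-GPO -All | Where-Object {$_.ModificationTime -ge $startDate}                                    # created / modified GPOs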

To run the script so that it just export local without email the reports use the below.

.\WeeklyAD_AuditReport_V1.ps1 -exportPath c:\Temp\AD_Audit\ -domains domain.local

To email the report use the below

.\WeeklyAD_AuditReport_V1.ps1 -SMTPServer mailserver.domain.local -toAddress administrator@domain.local -FromAddress ADreport@domain.local -exportPath c:\Temp\AD_Audit\ -domains domain.local

Once the script completes we can check that the csv files have been created.

If the SMTP server parameter is set, the script will send an email and add the csv files as attachments.

Below is what the outputs should look like.

GPO:

Deleted Objects:

Account expire:

The full script can be downloaded from the below link to my GitHub.

Scripts/ActiveDirectory/WeeklyReport at master · TheSleepyAdmin/Scripts (github.com)

The script can then be set up as a scheduled task to run on a weekly schedule.

VMware Daily Health Check HTML Report PowerShell

I have been working on a daily check report for our VMware environment so that we don’t have to manually check each morning.

The report uses PowerCLI to gather the information and then outputs the results to an HTML file.

The report requires that either the old PowerCLI snap-in is available or, preferably, the PowerCLI PowerShell module.

The script can either be run directly by a user with rights to query vCenter or by setting up a scheduled task.

The following prerequisite will be needed for the script to run.

  • PowerShell v4 or v5
  • PowerCLI 6.0 or later version
  • vCenter 6.0 or later version

There will also need to be a mail server or relay server available for the report to be emailed.

This has been tested on PowerCLI version 6.0 and above. The version on the server I will be running it from is 12.3.0, which is the latest release at this time.

The report checks

  • vCenter connection
  • VMware tools check
  • Snapshots older than the specified number of days
  • Host Alarms
  • VM Alarms
  • vCenter Alerts over the last 12 hours
  • Datastores under the specified % free space

There are mandatory parameters that are required for the script to run and send the report.

  • VCServer = vCenter Server address
  • SMTPServer = Mail server address
  • Toaddress = destination email
  • Fromaddress = sending address
  • ReportExport = folder the report will be exported to

There are some variables at the start of the script that can be set to customize the report to only show the required snapshots days and datastore % free. In my case I wanted 3 days and below 20% free on datastores.
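As a hedged illustration of two of those checks with the same thresholds (3 days and 20% free), not the exact code used in the report script:

$snapshotDays = 3
$datastoreFreePercent = 20
Get-VM | Get-Snapshot | Where-Object {$_.Created -lt (Get-Date).AddDays(-$snapshotDays)} |
    Select-Object VM, Name, Created, SizeGB
Get-Datastore | Where-Object {($_.FreeSpaceGB / $_.CapacityGB) * 100 -lt $datastoreFreePercent} |
    Select-Object Name, CapacityGB, @{N='%Free';E={[math]::Round(($_.FreeSpaceGB / $_.CapacityGB) * 100, 2)}}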

I have embedded the HTML CSS format in the script so it can be updated to change the color, font size or font type.

An example of how to run the script is below.

.\VMwareDailReportv1.ps1 -VCServer vcenter.domain.local -SMTPServer mail.domain.local -FromAddress VMwareReport@domain.local -toAddress Administrator@domain.local -ReportExport D:\Scripts\VMware\Daily_Report

Once completed the report should be emailed to the specified to address.

Below is an example of the report export.

The full script can be downloaded from the below link to my GitHub.

Scripts/VMwareDailyReport.ps1 at master · TheSleepyAdmin/Scripts (github.com)

To create a scheduled task to run the report each morning, go to Task Scheduler on the server or client that has PowerCLI installed.

Create a new task

Set the schedule.

Next we need to set PowerShell as the program to start and set the arguments to something similar to the below, updating the parameters and script location.

-ExecutionPolicy Bypass -NoProfile -File D:\Scripts\VMware\Daily_Report\VMwareDailReportv1.ps1 -VCServer vcenter.domain.local -SMTPServer mail.domain.local -FromAddress VMwareReport@domain.local -toAddress Administrator@domain.local -ReportExport D:\Scripts\VMware\Daily_Report

I don’t change anything on the Conditions tab and only update the 'stop the task if it runs longer than' setting to an hour on the Settings tab.
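The same task can also be registered from PowerShell if you prefer; the below is a sketch assuming a 7am daily run and a made-up task name, with the one-hour limit set through the task settings:

$action   = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -NoProfile -File D:\Scripts\VMware\Daily_Report\VMwareDailReportv1.ps1 -VCServer vcenter.domain.local -SMTPServer mail.domain.local -FromAddress VMwareReport@domain.local -toAddress Administrator@domain.local -ReportExport D:\Scripts\VMware\Daily_Report'
$trigger  = New-ScheduledTaskTrigger -Daily -At 7am
$settings = New-ScheduledTaskSettingsSet -ExecutionTimeLimit (New-TimeSpan -Hours 1)
Register-ScheduledTask -TaskName 'VMware Daily Report' -Action $action -Trigger $trigger -Settings $settings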

Once completed run the task to confirm all is working.

I will probably add more to the script, but this is just the initial version and I thought it might be helpful to anyone who wants to try to automate some of their manual checks.

Windows 10 20H2 Feature Update Error 0XC190012E

During our recent Windows 10 feature update deployment from 1809 to 20H2, we ran into an issue on some clients where they reported back error 0XC190012E to ConfigMgr.

The error code itself is just a generic code and I couldn't find it in either

Windows update error code list

Windows Update error code list by component – Windows Deployment | Microsoft Docs

or the Windows 10 upgrade errors doc

Get help with Windows 10 upgrade and installation errors (microsoft.com)

We first checked the temporary location that the feature update deploys to, c:\$Windows.~BT, to check if there was any issue in the compatibility xml file or in the setup logs under sources\panther, but there were no folders other than sources.

Since there were no files, I thought this might be a space issue, so I ran some remote WMI commands to check the disk space available. I used the below PowerShell using Get-WMIObject.

Get-WmiObject Win32_logicaldisk -ComputerName RemoteComputerName | Select-Object @{Name="Drive";E={$_.DeviceID}},
@{Name="Size(GB)";E={[math]::Round($_.size/1gb)}},
@{Name="Free Space(GB)";E={[math]::Round($_.freespace/1gb)}},
@{Name="%Free Space";E={"{0:N2}" -f (($_.freespace/$_.size)*100)}}

In this case it wasn’t disk space, as there was over 100GB free.

There wasn’t much online about the error other than the usual: run sfc /scannow, check the disk and run the update troubleshooter, but none of these worked.

Then I found this post on the Microsoft forums that pointed to an issue with the setupconfig.ini file.

Generic 0xc190012e trying to upgrade Windows 10 1909 to 20H2 – Microsoft Community

The setupconfig.ini file is located under C:\Users\Default\AppData\Local\Microsoft\Windows\WSUS.

We aren’t using any custom settings in setupconfig.ini, so there was no issue with removing it.
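To clear it from a remote device, a hedged one-liner over the admin share is below (PC01 is a placeholder computer name):

Remove-Item "\\PC01\c$\Users\Default\AppData\Local\Microsoft\Windows\WSUS\SetupConfig.ini" -Force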

Once I removed the file and kicked off the feature update again, it completed without issue.

I don’t know how the setupconfig.ini file was created on only a few devices but it was a quick fix once we found the issue was the config file.

Install and Configure vRealize Operations Manager 8.2 Part 8 Configure Windows Server Monitoring

In the previous seven posts we went through installing and configuring the vROps virtual appliance, connecting to vCenter Server, configuring Windows Active Directory as an identity source, creating custom alerts and notifications, creating dashboards, upgrading the appliance to the latest version and requesting / configuring a custom SSL cert.

Part 1: Install and Configure vRealize Operations Manager 8.2 Part 1 – TheSleepyAdmins

Part 2: Install and Configure vRealize Operations Manager 8.2 Part 2 Connect to vCenter – TheSleepyAdmins

Part 3: Install and Configure vRealize Operations Manager 8.2 Part 3 AD Authentication – TheSleepyAdmins

Part 4: Install and Configure vRealize Operations Manager 8.2 Part 4 Create Alerts and Notifications – TheSleepyAdmins

Part 5: Install and Configure vRealize Operations Manager 8.2 Part 5 Create a Dashboard – TheSleepyAdmins

Part 6: Install and Configure vRealize Operations Manager 8.2 Part 6 Upgrading vROps – TheSleepyAdmins

Part 7: Install and Configure vRealize Operations Manager 8.2 Part 7 Configure Custom Certificate – TheSleepyAdmins

In this post we will be going through installing the Windows agent and configuring the management pack to alert on Windows Server OS-level issues like performance, services and applications. This can be useful for monitoring physical servers running Windows.

Below is a link to the VMware document on vROps agent deployment I used for reference.

End Point Operations Management Agent Installation and Deployment (vmware.com)

Below is the list of supported operating systems for the vROps agent.

Operating System | Processor Architecture | JVM
RedHat Enterprise Linux (RHEL) 5.x, 6.x, 7.x | x86_64, x86_32 | Oracle Java SE8
CentOS 5.x, 6.x, 7.x | x86_64, x86_32 | Oracle Java SE8
SUSE Enterprise Linux (SLES) 11.x, 12.x | x86_64 | Oracle Java SE8
Windows 2008 Server, 2008 Server R2 | x86_64, x86_32 | Oracle Java SE8
Windows 2012 Server, 2012 Server R2 | x86_64 | Oracle Java SE8
Windows Server 2016 | x86_64 | Oracle Java SE8
Solaris 10, 11 | x86_64, SPARC | Oracle Java SE7
AIX 6.1, 7.1 | Power PC | IBM Java SE7
VMware Photon Linux 1.0 | x86_64 | Open JDK 1.8.0_72-BLFS
Oracle Linux versions 5, 6, 7 | x86_64, x86_32 | Open JDK Runtime Environment 1.7

First we need to enable the management pack for Operating Systems / Remote Services Monitoring.

After the management pack is enabled we need to download the agent. The 8.2 version works for both 8.2 and 8.3 and is available to download on the same page as the vROps appliance.

Download vRealize Operations – My VMware

Once we have the agent, we can deploy to the servers that need to be monitored.

Copy the file to the server and run installer.

Add in the vROps server when prompted.

Next the installer will ask for the thumbprint of the cert that is used for vROps. Log on to https://vrops/admin and click on the cert icon on the top right to view the current cert details.

Enter the user name and password that will be used to connect to vROps.

Set the install location. The default is to install in c:\ep-agent; this can be changed if required.

The agent should now start to install.

We can run ep-agent.bat query from the ep-agent\bin folder in the install location to confirm the agent has installed correctly.

Once completed we can check vROps to confirm the agent is reporting back. To view the agent in vROps, log on to the web client > Administration > End Point Operations.

To view details for the server go to Environment > Operating Systems > Operating System World > Windows and select the server to view.

Once the server is added we can now monitor disk, CPU, memory and other metrics.

We can also monitor services.

To add a service to be monitored, go to the server object and click on Actions > Monitor OS Object > Monitor Windows Service.

Give the monitor a name, select the object type and add in the service name (this needs to be the actual name and not the display name)
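If you are not sure of the actual service name, a quick way to check it from PowerShell is:

Get-Service | Select-Object Name, DisplayName   # Get-Service -DisplayName "*Print*" can narrow it down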


Set the collection interval and click OK to create the monitor.

Click on Environment and we can view the service monitor we just added.

If we stop the service the next time the collection runs the service should show a critical alert.

We can add additional metrics if needed. In this example we want to see the logical disk % free space.

First we either need to modify the existing policy or create a new policy.

In this example we will be adding a new policy and inheriting from the default policy.

Go to Policies and click Add, give the policy a name and select where it will inherit from. Then click Create Policy.

Go to the policies and click on the policy we just created and go to edit policy.

We will be adding a metric so we will select metrics and properties and enable the required metrics.

% free is under EP Ops Adapter > Windows > Fileserver Logical Disk > Utilization and % Free space (%).

Set the policy state to enabled.

Next we can apply the policy either to the object or, if there are a lot of devices, it would be easier to create and apply it to a custom group.

Now we can go to the server and confirm the policy is applied.

After a few minutes, if we check the server object we can see the new metric and the data starting to be shown.

Now that we have the metric showing next we can create an alert.

First we will need to create a symptom definition. Go to symptom definitions and click add.

Select the metric that will be used and give the symptom a name and set the threshold.

We can search for the symptom to confirm it exists.

Next we need to create the alert. Go to alert definitions and click add.

Give the Alert a name and select Windows as the base object type.

Next we need to add the symptom we created.

Add a recommendation if any are applicable, or create a recommendation (this is not required but can be useful).

We need to add the alert to a policy, in this case the Windows_Server_Agent policy, and create a notification if required.


We can search for the alert to confirm it has been created and to view the details.

Now when the server goes below 10% free disk space the server will alert.

Below is what the email notification will look like. We have configured email notifications in a previous post, so we won't go back over it here.

There are many metrics and alerts that can be configured; this is just an example of one type. We can also create multiple alerts so that we get a warning alert at maybe 20% before getting a critical alert.

Install and Configure vRealize Operations Manager 8.2 Part 7 Configure Custom Certificate

In the previous six posts we went through installing and configuring the vROps virtual appliance, connecting to vCenter Server, configuring Windows Active Directory as an identity source, creating custom alerts and notifications, creating dashboards and upgrading the appliance to the latest version.

Part 1: Install and Configure vRealize Operations Manager 8.2 Part 1 – TheSleepyAdmins

Part 2: Install and Configure vRealize Operations Manager 8.2 Part 2 Connect to vCenter – TheSleepyAdmins

Part 3: Install and Configure vRealize Operations Manager 8.2 Part 3 AD Authentication – TheSleepyAdmins

Part 4: Install and Configure vRealize Operations Manager 8.2 Part 4 Create Alerts and Notifications – TheSleepyAdmins

Part 5: Install and Configure vRealize Operations Manager 8.2 Part 5 Create a Dashboard – TheSleepyAdmins

Part 6: Install and Configure vRealize Operations Manager 8.2 Part 6 Upgrading vROps – TheSleepyAdmins

In this post we will be going through requesting and applying a custom certificate. Configuring a custom cert is good practice from a security standpoint and will also stop the security warning when accessing the vRealize web client.

Adding a certificate requires an internal certificate authority that can be used to issue the certificate. We could use a public CA but there would be a cost to that, so in this example we will be using a Windows Server CA.

I used the below VMware documentation as reference when creating the cert.

Enabling TLS on Localhost Connections (vmware.com)

Configure a Custom Certificate (vmware.com)

The first step is to connect to the vROps appliance over SSH and generate the key file and cert request that will be used to generate the cert.

To enable SSH, go to the admin page and enable the SSH status.

If you have not already updated the root password on the appliance, this is required to connect over SSH. To do this, open a VM console for the appliance and go to login. The default root password is blank, so just hit enter and it will prompt for a new password to be set.

Once the above has been completed, SSH to the vROps server. I use PuTTY but any SSH client will work.

After connecting I usually create a folder to keep the key file and cert request in, so they are simpler to find later if I need them again.

Next we need to generate a key file

openssl genrsa -out key_filename.key 2048

Next run the below command to create the certificate request

openssl req -new -key key_filename.key -out certificate_request.csr

Enter in the details for the cert. These can also be pre-created using a config file, but I just typed them into the SSH console.

There should now be a key file and cert request in the folder.

Copy the .csr file as this will be used to generate the cert from the internal CA.

To generate the certificate, log on to the Microsoft CA web enrollment page.

Click Request a certificate and then advanced certificate request.

Click submit a certificate request

Open the .csr file in a text editor and copy the content into the certificate request box, then select the certificate template to be used.

Click submit and the certificate should be generated. The cert needs to be downloaded as Base 64 encoded.

Save the cert. The root CA cert also needs to be downloaded

Once all the cert files and the key file are created, they need to be combined into a .pem file as that is the required format for vROps.

To combine the certs on Windows we can use the type command. The order needs to be: server cert, then key file, then intermediate certs (if there are any; in my case I only have the root cert), then the root cert, and then the PEM output file.

type server_cert.cer key_filename.key cacerts.cer > vrops.pem
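If you would rather do this from PowerShell, the below should produce the same .pem file, assuming the same file names and order:

Get-Content server_cert.cer, key_filename.key, cacerts.cer | Set-Content vrops.pem -Encoding Ascii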

The .PEM file should now be created and is ready to be applied to vROps.

The last step is to apply the certificate. Log on to the vROps admin page and go to the certificate icon in the top right.

Click Install new certificate.

Click browse and select the .pem file we created. If there are no issues with verifying the .pem file it should show as ready to install.

Click install to complete. The page should now reload and when we check the cert it should now be using the custom cert.

In the next post we will go through installing the Windows vROps agent and configuring the Windows management pack.