Export Remote Shares and Folder permissions using PowerShell

We have recently been looking to audit shares and permissions on some Windows servers. I have previously used a script to export folder permissions, so some of this script is reused from that. The main difference is that we will be using a WMI query to get the list of shares and looping through the specified servers.

To get the list of shares we will use the Win32_Share WMI class and filter out the default shares.

Get-WmiObject -ComputerName $Server -Class Win32_Share -Filter "Description != 'Remote Admin' and Description != 'Default share' and Description != 'Remote IPC' and Description != 'Printer Drivers'" | Select-Object -ExpandProperty Name
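
The full script essentially loops this query over each server. Below is a minimal, hypothetical sketch of the filtering logic, using sample objects in place of live WMI results (the helper name Select-NonDefaultShare is mine, not from the script):

```powershell
# Filter out the built-in admin shares the same way the WMI -Filter clause does.
function Select-NonDefaultShare {
    param([object[]]$Shares)
    $Defaults = 'Remote Admin', 'Default share', 'Remote IPC', 'Printer Drivers'
    $Shares | Where-Object { $Defaults -notcontains $_.Description }
}

# Hypothetical sample objects standing in for the Win32_Share query results;
# on a real server you would pass the Get-WmiObject output in instead.
$SampleShares = @(
    [pscustomobject]@{ Name = 'ADMIN$'; Description = 'Remote Admin' }
    [pscustomobject]@{ Name = 'C$';     Description = 'Default share' }
    [pscustomobject]@{ Name = 'Data';   Description = 'Team data share' }
)

(Select-NonDefaultShare -Shares $SampleShares).Name   # → Data
```

On real servers this would be wrapped in a foreach over the server list, e.g. `Select-NonDefaultShare -Shares (Get-WmiObject -ComputerName $Server -Class Win32_Share)`, with Get-Acl used afterwards to pull the share folder permissions.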

The full script is located in my GitHub repository; see the link below.

Scripts/Get-SharesAndPermissions.ps1 at master · TheSleepyAdmin/Scripts (github.com)

To run the script, use the command below and update the ExportPath and Servers parameters. To add multiple servers, just put a comma between the server names.

.\Get-SharesAndPermissions.ps1 -ExportPath D:\Scripts\Folder_Permissions\Export -Servers Server1, Server2

Once the script has completed, the results will be exported to a CSV in the specified export folder path.

MECM Baseline: Check for GRUB vulnerability Windows 10

We needed to check for the GRUB vulnerability on our Windows 10 devices.

See advisory below:

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV200011

We have a few thousand devices to check, so checking manually was going to be an issue. We decided to use an MECM baseline to run a script to check for devices that had the issue.

Microsoft gives the below command in the advisory to check if the issue exists:

[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).Bytes) -match 'Microsoft Corporation UEFI CA 2011'

If this command returns a true value the device is vulnerable.
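
To see what the command is actually doing, here is the same decode-and-match logic run against a hypothetical byte array standing in for the real (Get-SecureBootUEFI db).Bytes value:

```powershell
# Hypothetical stand-in for the Secure Boot db bytes on a vulnerable device.
$SampleBytes = [System.Text.Encoding]::ASCII.GetBytes('...Microsoft Corporation UEFI CA 2011...')

# Decode the bytes to ASCII text and test for the third-party UEFI CA string.
$Result = [System.Text.Encoding]::ASCII.GetString($SampleBytes) -match 'Microsoft Corporation UEFI CA 2011'
$Result   # → True, meaning the CA is present in the db
```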

To use the command in a baseline, we wrapped it in a PowerShell try / catch to get a compliance response, as the command throws a terminating error (for example on devices without UEFI Secure Boot) and won't return a result otherwise.

try {
    $GRUBCheck = [System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).Bytes) -match 'Microsoft Corporation UEFI CA 2011'
    if ($GRUBCheck -eq $true) {
        $Compliant = 'False'
    }
    else {
        $Compliant = 'True'
    }
}
catch {
    # Get-SecureBootUEFI throws a terminating error when Secure Boot is not
    # available; swallow it so the script still exits cleanly.
}

$Compliant


Once we had the script we needed to create the configuration item and baseline.

To create the configuration item, open the MECM console and go to Assets and Compliance > Overview > Compliance settings > Configuration Items.

  • Click Create Configuration Item.
  • Select Windows Desktops and Servers (custom).
  • Select Windows 10 as the version of Windows that will be assessed.
  • Add a new settings item and give it a name.

  • Set the setting type to Script and the data type to String.
  • Click Add script and put in the script above.
  • Next, add in a compliance rule.

Give the compliance rule a name, select the settings item we created earlier, and set the value to True so that any device that doesn't have the vulnerability will return as compliant. Once all the settings and compliance rules are configured, follow the wizard to complete. Next we need to create the configuration baseline.

Go to Assets and Compliance > Overview > Compliance settings > Configuration Baselines and give the compliance baseline a name.

Click add and select configuration item, then add the configuration item created earlier. Click OK to complete the baseline. Once the baseline is configured, deploy it to the required device collection. You can either wait for the next machine policy retrieval on the client or run the action manually; once the client gets the updated policy, the baseline should show under Configurations.

Once evaluated, we can check the deployment in MECM to find devices that are compliant or non-compliant.

VMware 6.7 PSC Decommission: Failed to get the PSC thumbprint

As part of our recent VMware 6.7 upgrade we were migrating from an external PSC to an embedded PSC.

All went fine until we tried to decommission the old PSCs; when trying to view the thumbprint we got the below error:

Failed to get the PSC thumbprint. Ensure PSC port is correct.

Without getting the thumbprint you cannot continue the decommission.

We tested accessing the PSC using a web browser, and using openssl from the vCenter appliance:

openssl s_client -connect PSC01.domain.local:443

We also tested access using telnet to confirm port 443 was open:

curl -v telnet://PSC01.domain.local:443

All tests came back fine and no ports were being blocked.
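
For reference, the same port check can also be run from any PowerShell session using .NET's TcpClient. This is a hedged alternative sketch (the host name is the example PSC above), not what we ran at the time:

```powershell
# Default to the failure message; overwrite it if the connection succeeds.
$Result = 'Port 443 blocked or host unreachable'
$Client = [System.Net.Sockets.TcpClient]::new()
try {
    # Attempt a raw TCP connection to the PSC on port 443.
    $Client.Connect('PSC01.domain.local', 443)
    $Result = 'Port 443 reachable'
}
catch {
    # DNS failure or refused/filtered connection lands here.
}
finally {
    $Client.Dispose()
}
$Result
```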

In the end the issue was caused by using the short name for the vCenter server: https://VCSA/UI

When we changed to the full FQDN, https://VCSA.domain.local/UI, this fixed the issue and we were able to complete the decommission. The issue looks to be related to the SAN on the cert only having the FQDN.


VMware ESXi 6.7 Upgrade: Missing dependencies VIBs Error

Recently we have been upgrading some VMware hosts from ESXi 6.0 to ESXi 6.7. We were applying the image using VMware Update Manager and an HPE custom ESXi image.

When applying the image we were getting an incompatible warning and were not able to apply the image to upgrade ESXi on some hosts.

The issue was related to VIBs, but they were not showing in the HTML5 client.

To find the missing VIBs we ended up having to mount the ISO through HPE iLO and try a manual upgrade, which did show the conflicting VIBs.

In our case the VIBs causing the issue were the below.

Mellanox_bootbank_net-mlx4-core_1.9.9.8-10EM.510.0.0.799733
Mellanox_bootbank_net-mlx4-en_1.9.9.0-10EM.510.0.0.799733
Emulex_bootbank_scsi-lpfc820_10.5.55.0-10EM.500.0.0.472560
Mellanox_bootbank_net-mst_2.0.0.0-10EM.500.0.0.472560

The issue seems to be related to older hosts that were previously upgraded from ESXi 5.5.

Next we needed to find out if the VIBs were in use by either the storage or network adapters. Below is the VMware KB that explains how to do this.

https://kb.vmware.com/s/article/1027206

To get the list of storage and network adapters, use the esxcli commands:

esxcli storage core adapter list

esxcli network nic list

To check the VIB versions we can use:

esxcli software vib list | grep Mel

esxcli software vib list | grep scsi-lpfc820


Once we know the version numbers of the VIBs, we just need to confirm they are not in use and, if not, remove them.

If they were in use, we would need to look at creating a custom image or wiping and reloading the ESXi host.

The first command we used was vmkload_mod to view the storage driver version; we used esxcli to view the versions for the NICs.


Once we confirmed that none of the VIBs are required, the last step is to remove each one. Below is the KB from VMware on removing VIBs.

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.upgrade.doc/GUID-7FFEBD91-5D82-4E32-93AB-F10D8BFFECAA.html

There might be some VIBs that have dependencies on others; in our case net-mlx4-core needed to be removed after net-mlx4-en, as net-mlx4-en depends on it.

To remove, we used: esxcli software vib remove -n 'vibname'


Reboot the ESXi host if required.

After the reboot, scan the host again from the updates tab in the VMware vSphere web client and it should now show as non-compliant instead of incompatible.

The host should now upgrade as normal.

Migrating User Data Using USMT: MECM OSD Deployment

We have been doing migrations from some old Windows 8.1 devices to Windows 10. We couldn't do a direct upgrade as the devices were going from 32-bit to 64-bit and we needed to reformat the disk for UEFI.

We needed to migrate the users' data to limit the manual work for each device, so we decided to use the User State Migration Tool (USMT).

It's been a few years since I had to use USMT, so I thought it would be good to do a post on using it.

First I went through the Microsoft USMT documentation.

https://docs.microsoft.com/en-us/mem/configmgr/osd/deploy-use/refresh-an-existing-computer-with-a-new-version-of-windows

https://docs.microsoft.com/en-us/windows/deployment/usmt/usmt-technical-reference

https://docs.microsoft.com/en-us/windows/deployment/usmt/understanding-migration-xml-files

We are using MECM 2002 and have installed ADK 1903 which has USMT version 10.

Since the disk is going to be wiped and reformatted, we won't be able to save the state to a local partition, so we will need to use a state migration point to save the content to.

First step is to install and configure state migration point in MECM.

Open the MECM console and go to Administration > site configuration > servers and site system roles, and select or add the server that will be used as the state migration point.

Set the max free space in MB, GB or percentage. We also need to set the boundary group that this state migration point will be associated with.


Once the role has been installed we can start to create the task sequence to wipe and reload the device.

We won’t go through creating the TS from scratch in this post.

The first step to check is the capture state task. By default USMT will use the MigApp.xml and MigDocs.xml to set what will be copied.

To view the xml files, go to the Windows ADK install location; the default location is:

C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\User State Migration Tool\amd64


We will be using the default xml files, but you can create custom xml files to customize what files and settings need to be copied and restored.

To set the custom xml, select customize how user profiles are captured and click on the files button to add in the xml files.

Once the TS is set up, we just need to deploy it to a collection so that devices can be added and pick up the wipe and reload TS. Once deployed, either wait for the TS to be picked up or run the client actions machine policy retrieval and application deployment evaluation.

We can check the smsts.log file to view what is happening; we can see the files being downloaded for the USMT package. Once scanstate kicks off, you can see the xml files that are used in the log. We can also check the USMT folder on the state migration point to see if data is being copied.

Once the image is applied, we can check the restore state task in the smsts.log. The command will call loadstate and use the default or custom xml files.

When the TS has fully finished and we log on to Windows, we should see that the device is running Windows 10 and has the files and settings migrated.

Export GPO assignments using PowerShell

Recently we wanted to do a review of all our Active Directory Group Policy Objects (GPOs). We wanted to see which GPOs were not assigned, and which OUs the rest were assigned to, so that we could try to consolidate or remove unused GPOs.

There were a couple of hundred in each domain, so I didn't want to have to check each one manually.

There is a PowerShell command that you can run to list all GPOs, but it doesn't show assignments.


To get more information on a GPO we can run the command Get-GPOReport, which lets you create either an HTML or XML report.

In this case I want to use XML as I want to pull information from the report; the only issue is that getting data directly out of an XML report is a bit difficult.

To read an XML report in PowerShell, you can typecast it to XML by putting [xml] in front of the variable, which makes querying the content easier. The only part of the XML that I really want currently is the LinksTo element, as this shows where the GPO is assigned. Once I had all this information I was able to create the full script. I will put the script up on GitHub since it is easier for people to copy the script file.
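
A minimal, self-contained illustration of the [xml] typecast, using a trimmed, hypothetical fragment of what Get-GPOReport -ReportType Xml produces:

```powershell
# Hypothetical, trimmed-down GPO report fragment; a real report has many more
# elements, but the LinksTo shape is the part we care about here.
$Report = [xml]@'
<GPO>
  <Name>Default Domain Policy</Name>
  <LinksTo>
    <SOMName>domain.local</SOMName>
    <SOMPath>domain.local</SOMPath>
    <Enabled>true</Enabled>
  </LinksTo>
</GPO>
'@

# Typecasting to [xml] lets us walk the report with dot notation.
# A GPO with no LinksTo element is unlinked.
$Report.GPO.LinksTo.SOMPath   # → domain.local
```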

https://github.com/TheSleepyAdmin/Scripts/tree/master/ActiveDirectory/GPO

Below is the script running, and this is what the export will look like.

MECM 2002 Cloud Management Gateway Configuration

I have been looking at setting up the MECM cloud management gateway (CMG) for a while but haven't been able to, due to the need for PKI or Azure AD joined devices.

The recent release of MECM 2002 added a new feature that allows token-based authentication. I decided to do a test deployment in my lab to see how this would work before deploying to our production environment.

With a CMG we can manage clients that don't regularly connect back to the corporate network, which has become more of a priority recently.

There is a cost for running the VMs in Azure that will be used as the CMG, and for outbound data transfers. Johan Arwidmark has done a good real-world cost estimate for a CMG.

https://deploymentresearch.com/real-world-costs-for-using-a-cloud-management-gateway-cmg-with-configmgr/

The first step should be to have a read of the docs for planning a CMG:

https://docs.microsoft.com/en-us/mem/configmgr/core/clients/manage/cmg/plan-cloud-management-gateway

Token-based authentication requires MECM 2002, so that is a prerequisite for deploying a CMG in this way.

https://docs.microsoft.com/en-us/mem/configmgr/core/clients/deploy/deploy-clients-cmg-token

There are also some required permissions in Azure for deploying the CMG:

  • An account that has the Global Admin and Subscription Owner roles.
  • Content filter rules to allow outbound access (if there is a proxy or firewall filtering traffic).

There is also a requirement for a cert to be applied to the CMG; it is recommended to use a third-party cert as it should be automatically trusted by clients.

From looking through the documents, below are the required URLs: https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/network/internet-endpoints#cmg-connection-point

*.cloudapp.net

*.blob.core.windows.net

login.microsoftonline.com

Once all pre-requisites have been confirmed, we can start to configure the CMG. We will need to pick a unique cloud service name as this will be required later; the easiest way to check if the name you have selected is unique is to log on to Azure and go to Cloud services (classic).


Once the cloud service name has been checked, we then need to configure the Azure services.

Logon to the MECM console and go to Administration > Cloud services > Azure services.

Click configure Azure services.

Give the service a name and select cloud management.

We need to configure a server and a client application. Give the applications a name and sign in to Azure using an account with the required permissions. I just left user discovery enabled. Click next to finish configuring the Azure services. Once finished, the Azure service should now be showing, and the Azure AD tenant should also show with both applications we just created.

Once this has all been configured, we can start to set up the CMG.

To start the configuration go to Administration > Cloud services > Cloud Management Gateway

Click Create Cloud Management Gateway.

Select AzurePublicCloud and sign in. (If the subscription ID doesn't show, it might be that the account you are using is not an owner of the subscription.)

Now we need to configure the cloud service; this is where we will use the name we checked earlier.

  • Select the cert file; this needs to be a PFX with the private key (I am using one created on my internal CA, but in production I will be using a third-party CA like DigiCert or GoDaddy).
  • The cert name automatically sets the deployment service name, which is why we should confirm the name beforehand so we can generate the cert with the same name.
  • Select the region the CMG will be configured in.
  • Either select an existing or create a new resource group (I chose a new one to keep the CMG separate from my other Azure resources).
  • Select the required number of VMs; this can go up to 16 (for high availability it is recommended to configure at least 2 VMs).
  • Tick the required security authentication; I just ticked Enforce TLS 1.2.
  • I also ticked the option to use the CMG as a cloud distribution point.

I left the alerting as default, then just followed the wizard to complete.

Once completed, the CMG should show as provisioning started.


We can also log on to Azure to verify the cloud service has been created.

After the CMG has been configured, we then need to install the cloud management gateway connection point to connect MECM to the CMG.

Go to Administration > site configuration > servers and site system roles, and add the cloud management gateway connection point to the primary site server in MECM.


After the role has been configured, we need to configure a few settings on the site server, management point and software update point (if installed and configured).

Open the management point properties and tick allow configuration manager cloud management gateway traffic. (If the tick box is greyed out, there is an additional step required.)

Go to Administration > site configuration > sites, then configure the primary site to use a configuration manager generated certificate in communication security. Once this is done, go back to the management point and the tick box should now be available.


To allow the software update point to communicate with the CMG, tick allow configuration manager cloud management gateway traffic there as well. After this has been configured, the clients should pick up the CMG as an internet-based management point in the network tab of the client agent properties.

Once the client moves off the internal network and does a location lookup, we should see the connection type change from internal to internet.

We can also check the location services log to see if the CMG is being picked up.

Connect Windows Admin Center to Azure

In this post we will be going through connecting Windows Admin Center (WAC) to Azure to allow management of Azure VMs. To install WAC, see my previous post.

The Azure integration allows the management of Azure and on-prem servers from a single console.

The first step is to register WAC with Azure. Open the WAC admin console and go to the settings tab.

Go to Azure in the gateway settings. Copy the code, click on the enter code hyperlink and enter the code.

Sign in using an admin account on the Azure tenant.

Now go back to WAC and click connect to finish the registration.

Once WAC is registered, it requires admin application permissions to be granted to the application registration in Azure.

Now that the registration is completed we can add Azure VMs to WAC: go to add and select Azure VM.

Select the subscription (if there are multiple subscriptions in your tenant), resource group and VM that will be added.

Once the Azure VM is added, management ports will need to be opened to allow a connection between WAC and the Azure VM. If you are using a site-to-site VPN you can just allow the ports over the VPN connection.

I have a public IP associated with my VM and I will be modifying my network security group to allow the ports from my public IP.

I won't be going through configuring an NSG as this was covered in a previous post.

On the VM itself you need to enable WinRM and allow port 5985 through the Windows firewall if enabled. This can be done by running the two commands below from an admin PowerShell session.

winrm quickconfig
Set-NetFirewallRule -Name WINRM-HTTP-In-TCP-PUBLIC -RemoteAddress Any

Once the NSG is configured, we should then be able to connect to the VM.

Below shows the overview of the VM. We can also now connect to the VM using the integrated RDP console in WAC.

WAC also allows us to manage services, scheduled tasks, backups, event logs and other admin tasks, along with connecting using remote PowerShell directly from WAC.

Export folder permissions using PowerShell

Recently we were moving folders and shares from one server to another. We needed to confirm that the folders and permissions were the same on both the old and new shares.

To do this I used PowerShell to export the pre and post move permissions and compare the results.

The two commands that get most of the information are Get-ChildItem and Get-Acl. The only part of the script that needs to be changed is the export file name, to give it a custom name.

Below is the link to the full script I will be using.

https://github.com/TheSleepyAdmin/Scripts/tree/master/General/Folder%20Permission

To run the script there are two mandatory parameters, and the command should look like the below. (If you want to look up all subfolders as well, change line 14 and add -Recurse to the Get-ChildItem command.)

.\Get-FolderPermissions.ps1 -FolderPath \\lab-host01\sources -ExportPath D:\Scripts\Folder_Permissions\Export

I ran the script and changed the exported csv name to pre and post, to be used for the comparison.
Below is what the export should look like.
Once the pre and post exports are done, we can use Compare-Object to find any differences. Just update the Import-Csv paths. I was moving to a share that would keep the same FQDN; if that is not the case, remove FolderPath from the Compare-Object property list, otherwise none of the results will match.
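
The comparison step can be sketched with in-memory sample rows standing in for the real Import-Csv output (the column names here mirror the export's FolderPath-style columns but are assumptions):

```powershell
# Hypothetical pre-move export rows.
$Pre = @(
    [pscustomobject]@{ FolderPath = '\\server\share\HR'; Identity = 'DOMAIN\HRGroup'; Rights = 'Modify' }
    [pscustomobject]@{ FolderPath = '\\server\share\IT'; Identity = 'DOMAIN\ITGroup'; Rights = 'FullControl' }
)
# Hypothetical post-move export rows; the IT entry is missing after the move.
$Post = @(
    [pscustomobject]@{ FolderPath = '\\server\share\HR'; Identity = 'DOMAIN\HRGroup'; Rights = 'Modify' }
)

# SideIndicator "<=" means the entry only exists in the pre-move export,
# "=>" means it only exists post-move; matching rows are not returned.
Compare-Object -ReferenceObject $Pre -DifferenceObject $Post -Property FolderPath, Identity, Rights
```

With the real exports, replace the two arrays with `Import-Csv` of the pre and post CSV files.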
Below is the link to the script I used.

Below are the export results showing the difference between the pre and post move.

MECM SQL query: check for all updates that are required but not deployed

Recently we had an issue where some software updates were missed during the regular patching cycle. I wanted an automated report to list all updates that were required on more than one system but not deployed.

First I checked the Microsoft SQL views for software updates:

https://docs.microsoft.com/en-us/configmgr/develop/core/understand/sqlviews/software-updates-views-configuration-manager

The three views I used were:

V_UpdateComplianceStatus: This view is used to get the status code for updates. A status of 2 means the update is required.

v_UpdateInfo: This view is used to get information on each update.

https://docs.microsoft.com/en-us/configmgr/develop/core/understand/sqlviews/software-updates-views-configuration-manager#v_updateinfo

v_Update_ComplianceSummary: This view is used to get the compliance of each update, to see how many devices are missing it.

https://docs.microsoft.com/en-us/configmgr/develop/core/understand/sqlviews/software-updates-views-configuration-manager#v_update_compliancesummary

select distinct
    UI.DatePosted as 'Release Date',
    UI.ArticleID as 'ArticleID',
    UI.Title,
    Update_Required = (case when UCS.Status = 2 then 'Yes' end),
    CS.NumMissing,
    Updates_Deployed = (case when UI.IsDeployed = 0 then 'NotDeployed' end),
    UI.InfoURL as InformationURL
from V_UpdateComplianceStatus UCS
join v_UpdateInfo UI on UI.CI_ID = UCS.CI_ID
join v_Update_ComplianceSummary CS on CS.CI_ID = UCS.CI_ID
where UCS.Status = 2 and UI.IsDeployed = 0
order by CS.NumMissing desc

Below is what the query return looks like in SQL Management Studio.

To confirm the query is working, we can create a search filter for all software updates in the console and make sure the results match.

The last step was to create a weekly email report using SQL Server Reporting Services (SSRS).

To create a new report (the MECM reporting point role will need to be installed for this), go to Monitoring > Reporting > Reports. I created a custom Reports folder to keep any reports I create.

Right click on reports and click create report.

Give the report a name and a folder path where the report will be stored. Follow the report wizard to finish the report creation; once it has completed, Report Builder should load. First create a new dataset: right click on Datasets > Add Dataset.

Give the dataset a name, select use a dataset embedded in my report, select the MECM data source, and set the query type to text. The last part is to copy the query into the query field. Click OK and the dataset should show under datasets.

Next we create a table for the report view. Select the dataset we created. I just wanted to use all values, so drag all the rows to the value box. Next, go through the rest of the wizard; once completed you should see something similar to the below. I added a text box at the top for a title and added some grey filling on the table.

Now save the report, and we can run it from the MECM console to see what the report will look like.

The last step is to create a subscription to send out the weekly report. Select report delivered by email and put in the required details and the report format to be included. (Email settings need to be configured in SSRS before you can select email as a delivery option.) Set the schedule that is required and complete the wizard. The report should now be sent out weekly as a csv file.