Connecting to Microsoft Graph API Using PowerShell

Recently I have been looking at using Microsoft Graph to query information from Microsoft 365 services.

Microsoft Graph is a development API that connects to multiple Microsoft 365 services and allows you to query data and automate tasks.

There are a few steps required before you can start using Graph: creating an app registration in Azure to issue authentication tokens and granting API permissions to view data.

Use the Microsoft Graph API – Microsoft Graph | Microsoft Docs

I also used a blog post from adamtheautomator.com, which explains the process very well and goes into more depth.

Using the Microsoft Graph API with PowerShell (adamtheautomator.com)

In this post we will go through configuring the app registration and querying some data from Azure AD.

The first step is to log on to the Azure portal > Azure AD > App registrations and click on New registration.

Give the app a name and specify the supported account type; in this case we only want accounts from our tenant.

Once completed, we should now see the app has been created.

Next we need to configure the API permissions. Depending on the type of access required we will use either delegated or application permissions, as some data can only be accessed with one permission type or the other.

Below is an extract from the Microsoft Docs on permission types:

Microsoft identity platform developer glossary | Microsoft Docs

Permissions

A client application gains access to a resource server by declaring permission requests. Two types are available:

  • “Delegated” permissions, which specify scope-based access using delegated authorization from the signed-in resource owner, are presented to the resource at run-time as “scp” claims in the client’s access token.
  • “Application” permissions, which specify role-based access using the client application’s credentials/identity, are presented to the resource at run-time as “roles” claims in the client’s access token.

To assign permissions, go to the app registration we created earlier, then API permissions > Add a permission, and select Microsoft Graph.

To check which permissions are required I used the Microsoft Docs reference below.

Microsoft Graph permissions reference – Microsoft Graph | Microsoft Docs

Select the permission type and the required permissions; in this case I want to be able to read groups, users and the directory.

Once the required permissions are added, any that require admin consent will need to be granted using the Grant admin consent option below.

There are many different ways to connect to Microsoft Graph, but in this post we will be using a client secret.

We will need the application ID.

Next we will create a client secret.

Give the client secret a name and set the expiry; in this case we will use 1 year.

There should now be a client secret, and its value is what we use to authenticate. (Take note of the value and save it in a secure location like a password vault or Azure Key Vault; once you leave the app blade the value will be hidden, and if you lose it the secret will have to be recreated.)

Once we have the above configured we can connect to the Graph API to generate a token. We will use Invoke-RestMethod.

The secret can be hardcoded, but I decided to use Read-Host so that I could enter the secret manually, as it's not recommended to have any password or secret hardcoded in a script.

Below is the command I used to get the token.

$ApplicationID = ""
$TenantDomainName = ""
$AccessSecret = Read-Host "Enter Secret"

$Body = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    client_Id     = $ApplicationID
    Client_Secret = $AccessSecret
}

$ConnectGraph = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$TenantDomainName/oauth2/v2.0/token" `
    -Method POST -Body $Body

$token = $ConnectGraph.access_token

To verify we have a token, output the $ConnectGraph variable and check that access_token is populated.

Now that we have a token we can run a query against the Graph API. Below we query for groups and select the display name.

$GraphGroupUrl = 'https://graph.microsoft.com/v1.0/groups/'
(Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphGroupUrl -Method Get).value.displayName
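Since we also granted user read permissions earlier, the same pattern works for other endpoints. Below is a minimal sketch querying users and returning their display name and UPN (this assumes the User.Read.All application permission was one of the permissions added):

$GraphUserUrl = 'https://graph.microsoft.com/v1.0/users/'
# Return the display name and UPN for each user in the tenant
(Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUserUrl -Method Get).value |
    Select-Object displayName, userPrincipalName

Note that Graph returns results in pages, so for larger tenants you would follow the @odata.nextLink property in the response to retrieve everything.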

To view some examples we can use Graph Explorer.

Graph Explorer – Microsoft Graph

In a future post we will go through more queries and automating tasks using the Graph API.

Create Azure conditional access policy with named location

In this post we will be going through creating an Azure conditional access policy to restrict logging on to Azure / Office 365 from specific locations.

Conditional access policies are used to set requirements for accessing Azure or Office 365 resources. When using named locations we can set conditions based on IP ranges, trusted locations, or countries and regions.

The below diagram from Microsoft shows how conditional access works.

What is Conditional Access in Azure Active Directory? | Microsoft Docs

Conceptual Conditional Access process flow

The first step is to log on to Azure and go to Azure AD > Conditional Access.

Create a named location that will be used to restrict access.

Once in Named locations we can create a location based on either an IP range or countries/regions. In this case we will be using a country.
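As a side note, named locations can also be created programmatically through the Graph API. Below is a rough sketch using an access token in $token (as in the Graph API post above); the display name and country codes are just examples, and the app would need the Policy.ReadWrite.ConditionalAccess permission:

# Hypothetical example: create a country-based named location via Microsoft Graph
$Body = @{
    "@odata.type"                     = "#microsoft.graph.countryNamedLocation"
    displayName                       = "Allowed countries"      # example name
    countriesAndRegions               = @("IE", "GB")            # example ISO country codes
    includeUnknownCountriesAndRegions = $false
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri 'https://graph.microsoft.com/v1.0/identity/conditionalAccess/namedLocations' `
    -Headers @{ Authorization = "Bearer $($token)"; 'Content-Type' = 'application/json' } -Body $Body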

Next we will create the conditional access policy.

Give the policy a name. We will be using a group to apply the policy, but this could be changed to all users or a directory role. (Using select users or groups to apply the policy to a specific group is safer, as applying it to all users risks locking yourself out of the Azure portal.)

Next we can apply the policy to specific apps or all apps. (If applying to all apps there will be a warning about not locking yourself out.)

Next we set the conditions; in this case we will be using location, and I will be applying the policy to any location.

We can then set the named location as excluded in the policy, so that it won't apply when access comes from a country that is in the allowed named location.


Next we set the access controls; in this case we will block access, but this could be changed to require MFA or other controls shown below.

The last step is the session options: conditional access app control, sign-in frequency and persistent browser session. We won't be setting any of the session options, as this policy is simply blocking access.

There are three policy states: report-only, on or off. To test the policy and check what would be blocked without the risk of blocking real users, set it to report-only. (This is recommended before rolling the policy out to all users.)

Click create and the policy should now show, with the state set to report-only.

To test whether the policy would be applied we can check a user's sign-ins: click on a sign-in log entry, go to Report-only, click on the three dots and show details.

This shows which parts of the policy were satisfied and which were not. Since the location condition was not satisfied (we are connecting from an excluded location), the policy was not applied.


When a user tries to access from a country that is not in the allowed location, the result will be a failure and the user will be blocked from accessing.

Once testing has completed and there are no unexpected blocks, we can go back to the conditional access policy and change the state to on to apply the policy to users.

Now when a restricted user tries to access an Azure / Office 365 resource from a country not in the named location, they will receive a message like the one below.

Deploy MSI Application using Intune / MEM

In this post we will go over how to deploy an MSI application using Intune / MEM to a Windows 10 managed endpoint.

The below Microsoft doc goes through deploying a line-of-business application:

Add a Windows line-of-business app to Microsoft Intune | Microsoft Docs

To log on to the Microsoft Endpoint Manager admin center, go to:

https://endpoint.microsoft.com

Once in the MEM console go to Apps

Click on Add

Select app type: other > Line-of-business app

Once the wizard starts select app package and go to the MSI that is going to be deployed. We will be using Firefox in this post.

Next we can fill in the application details. The mandatory fields are Name, description and publisher.

Next we set the application assignment: either install (required or available) or uninstall, deployed to a specific group or to all users / devices. We will be setting this deployment as available.

The last step is to review and create the app deployment.

The application should now show in the Windows app blade.

Once the managed Windows device syncs with MEM the application should then show as available to install in the company portal.

If the application doesn’t show we can run a sync in MEM admin center or directly on the Windows 10 device.

To start the install click on the application.


Once installed we can check the application in MEM admin center to view compliance for the application deployment.

Export Remote Shares and Folder permissions using PowerShell

We have recently been looking to audit some Windows server shares and permissions. I have previously used a script to export folder permissions, so some of this script comes from that earlier script. The main difference is that we will be using a WMI query to get the list of shares and looping through the specified servers.

To get the list of shares we will use the Win32_Share WMI class and filter out the default shares.

Get-WmiObject -ComputerName $Server -Class win32_share -Filter "Description != 'Remote Admin' and Description != 'Default share' and Description != 'Remote IPC' and Description != 'Printer Drivers'" | Select-Object Name -ExpandProperty Name
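To give an idea of how the shares are tied back to permissions, below is a simplified sketch of the loop. The full script on GitHub does more; this version just reads the NTFS ACL over each share's UNC path, and the server names and export path are examples:

# Simplified sketch: enumerate non-default shares on each server and export the ACL entries
$Servers = 'Server1', 'Server2'   # example server names
$Results = foreach ($Server in $Servers) {
    $Shares = Get-WmiObject -ComputerName $Server -Class Win32_Share `
        -Filter "Description != 'Remote Admin' and Description != 'Default share' and Description != 'Remote IPC' and Description != 'Printer Drivers'"
    foreach ($Share in $Shares) {
        $Acl = Get-Acl -Path "\\$Server\$($Share.Name)"
        foreach ($Access in $Acl.Access) {
            [PSCustomObject]@{
                Server     = $Server
                Share      = $Share.Name
                Path       = $Share.Path
                Identity   = $Access.IdentityReference
                Rights     = $Access.FileSystemRights
                AccessType = $Access.AccessControlType
            }
        }
    }
}
$Results | Export-Csv -Path 'D:\Scripts\Folder_Permissions\Export\SharePermissions.csv' -NoTypeInformation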

The full script is located in my GitHub repository; see the link below.

Scripts/Get-SharesAndPermissions.ps1 at master · TheSleepyAdmin/Scripts (github.com)

To run the script, use the below and update the export path and servers. To add multiple servers, just separate the server names with a comma.

.\Get-SharesAndPermissions.ps1 -ExportPath D:\Scripts\Folder_Permissions\Export -Servers Server1, Server2

Once the script has completed, the results will be exported to a CSV in the export folder path.

MECM Baseline: Check for GRUB vulnerability on Windows 10

We needed to check for the GRUB vulnerability on our Windows 10 devices.

See advisory below:

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/ADV200011

We have a few thousand devices to check, so checking manually was going to be an issue. We decided to use an MECM baseline to run a script to find the devices that had the issue.

Microsoft gives the below command in the advisory to check if the issue exists:

[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Microsoft Corporation UEFI CA 2011'

If this command returns a true value the device is vulnerable.

To use the command in a baseline we wrapped it in a PowerShell try / catch to get a compliance response, as the command throws a terminating error on devices where it can't run and won't return a result otherwise.

try {
    # Check the Secure Boot UEFI db for the vulnerable certificate
    $GRUBCheck = [System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Microsoft Corporation UEFI CA 2011'
    if ($GRUBCheck -eq $true) {
        $Compliant = 'False'
    }
    else {
        $Compliant = 'True'
    }
}
catch {
    # Get-SecureBootUEFI throws a terminating error on devices without UEFI / Secure Boot,
    # so no value is returned for those devices
}

$Compliant


Once we had the script we needed to create the configuration item and baseline.

To create the configuration item, open the MECM console and go to Assets and Compliance > Overview > Compliance Settings > Configuration Items.

  • Create the configuration item and select Windows Desktops and Servers (custom).
  • Select Windows 10 as the version of Windows that will be assessed.
  • Add a new settings item and give the item a name.
  • Set the setting type to Script and the data type to String, click Add script, and put in the script above.
  • Next add a compliance rule: give it a name, select the settings item we created earlier, and set the value to True so that any device that doesn't have the vulnerability returns as compliant.
  • Once all the settings and compliance rules are configured, follow the wizard to complete.

Next we need to create the configuration baseline: go to Assets and Compliance > Overview > Compliance Settings > Configuration Baselines.

  • Give the configuration baseline a name.
  • Click Add, select Configuration Items, and add the configuration item created earlier.
  • Click OK to complete the baseline.
  • Once the baseline is configured, deploy it to the required device collection.

You can either wait for the next machine policy retrieval on the client or run the action manually. Once the client gets the updated policy, the baseline should show under Configurations in the Configuration Manager client.

Once evaluated, we can check the deployment in MECM to find the devices that are compliant or non-compliant.

VMware 6.7 PSC Decommission: Failed to get the PSC thumbprint

As part of our recent VMware 6.7 upgrade we were migrating from an external PSC to an embedded PSC.

All went fine until we tried to decommission the old PSCs; when trying to view the thumbprint we got the below error:

Failed to get the PSC thumbprint. Ensure PSC port is correct.

Without getting the thumbprint you can't continue.

We tested accessing the PSC using a web browser and using openssl from the vCenter appliance:

openssl s_client -connect PSC01.domain.local:443

We also tested access using curl in telnet mode to confirm port 443 was open:

curl -v telnet://PSC01.domain.local:443

All tests came back fine and there were no ports being blocked.

In the end the issue was caused by using the short name for the vCenter server, https://VCSA/UI.

When we changed to the full FQDN, https://VCSA.domain.local/UI, this fixed the issue and we were able to complete the decommission. The issue looks to be related to the SAN on the certificate only containing the FQDN.
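To confirm which names a certificate actually contains, below is a rough PowerShell sketch that pulls the certificate presented on port 443 and prints its Subject Alternative Name extension (the hostname is an example):

# Example: retrieve the remote certificate and show its SAN entries
$HostName = 'PSC01.domain.local'   # example host
$TcpClient = New-Object System.Net.Sockets.TcpClient($HostName, 443)
# Accept any certificate so it can be inspected even if it isn't trusted
$SslStream = New-Object System.Net.Security.SslStream($TcpClient.GetStream(), $false, { $true })
$SslStream.AuthenticateAsClient($HostName)
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($SslStream.RemoteCertificate)
$Cert.Extensions | Where-Object { $_.Oid.FriendlyName -eq 'Subject Alternative Name' } |
    ForEach-Object { $_.Format($true) }
$SslStream.Close(); $TcpClient.Close()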


VMware ESXi 6.7 Upgrade: Missing dependencies VIBs Error

Recently we have been upgrading some VMware hosts from ESXi 6.0 to ESXi 6.7. We were applying the image using VMware Update Manager and an HPE custom ESXi image.

When applying the image we were getting an incompatible warning and were not able to apply the image to upgrade ESXi on some hosts.

The issue was related to VIBs, but they were not showing in the HTML5 client.

To find the missing VIBs we ended up having to mount the ISO through HPE iLO and try a manual upgrade, which did show the conflicting VIBs.

In our case the VIBs causing the issue were the below:

Mellanox_bootbank_net-mlx4-core_1.9.9.8-10EM.510.0.0.799733
Mellanox_bootbank_net-mlx4-en_1.9.9.0-10EM.510.0.0.799733
Emulex_bootbank_scsi-lpfc820_10.5.55.0-10EM.500.0.0.472560
Mellanox_bootbank_net-mst_2.0.0.0-10EM.500.0.0.472560

The issue seems to be related to older hosts that were previously upgraded from ESXi 5.5.

Next we needed to find out if the VIBs were in use by either the storage or network adapters.

Below is the VMware KB that explains how to do this:

https://kb.vmware.com/s/article/1027206

To get the list of storage and network adapters, use the esxcli commands:

esxcli storage core adapter list

esxcli network nic list

To check the VIB versions we can use:

esxcli software vib list | grep Mel

esxcli software vib list | grep scsi-lpfc820


Once we know the version numbers of the VIBs, we just need to confirm they are not in use and, if they are not, remove them.

If they were in use, we would need to look at creating a custom image or wiping and reloading the ESXi host.

We use esxcli to check whether the drivers are in use and what version each one is:

esxcli system module list | grep lpfc820
esxcli network nic get -n vmnic0


Once we confirm that none of the VIBs are required, the last step is to remove each one. Below is the VMware documentation on removing VIBs:

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.upgrade.doc/GUID-7FFEBD91-5D82-4E32-93AB-F10D8BFFECAA.html

There might be some VIBs that have dependencies on others; in our case net-mlx4-core had to be removed after net-mlx4-en, as net-mlx4-en depends on it.

To remove a VIB we use: esxcli software vib remove -n 'vibname'
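If the hosts are managed with PowerCLI, the same removals can be scripted. This is only a hedged sketch using the Get-EsxCli V2 interface; the vCenter and host names are examples, and the VIB list is the one from above, in dependency order:

# Rough PowerCLI sketch: remove the unused VIBs (net-mlx4-en before net-mlx4-core)
Connect-VIServer -Server 'vcsa.domain.local'                              # example vCenter
$EsxCli = Get-EsxCli -VMHost (Get-VMHost 'esxihost01.domain.local') -V2   # example host
foreach ($Vib in 'net-mlx4-en', 'net-mlx4-core', 'net-mst', 'scsi-lpfc820') {
    $EsxCli.software.vib.remove.Invoke(@{ vibname = $Vib })
}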


Reboot the ESXi host if required.

After the reboot, scan the host again from the Updates tab in the vSphere web client and it should now show as non-compliant rather than incompatible.

The host should now upgrade as normal.

Migrating User Data Using USMT: MECM OSD Deployment

We have been doing migrations from some old Windows 8.1 devices to Windows 10. We couldn't do a direct upgrade, as the devices were going from 32-bit to 64-bit and the disk needed to be reformatted for UEFI.

We needed to migrate the users' data and limit the manual work for each device, so we decided to use the User State Migration Tool (USMT).

It's been a few years since I had to use USMT, so I thought it would be good to do a post on using it.

First I went through the Microsoft USMT documentation.

https://docs.microsoft.com/en-us/mem/configmgr/osd/deploy-use/refresh-an-existing-computer-with-a-new-version-of-windows

https://docs.microsoft.com/en-us/windows/deployment/usmt/usmt-technical-reference

https://docs.microsoft.com/en-us/windows/deployment/usmt/understanding-migration-xml-files

We are using MECM 2002 and have installed ADK 1903, which includes USMT version 10.

Since the disk is going to be wiped and reformatted, we won't be able to save the state to a local partition, so we will need a state migration point to save the content to.

The first step is to install and configure the state migration point in MECM.

Open the MECM console, go to Administration > Site Configuration > Servers and Site System Roles, and select or add the server that will be used as the state migration point.

Set the maximum free space in MB, GB or percentage. We also need to set the boundary group that this state migration point will be associated with.


Once the role has been installed we can start to create the task sequence to wipe and reload the device.

We won’t go through creating the TS from scratch in this post.

The first step is to check the capture user state task. By default USMT will use MigApp.xml and MigDocs.xml to set what will be copied.

To view the XML files you can go to the Windows ADK install location; the default location is:

C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\User State Migration Tool\amd64
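For context, the capture and restore steps in the task sequence essentially wrap scanstate.exe and loadstate.exe from this folder. A rough sketch of the equivalent standalone commands (the store path is just an example):

# Capture user state to a store location (run from the USMT amd64 folder)
.\scanstate.exe C:\USMTStore /i:MigApp.xml /i:MigDocs.xml /o /c /v:13 /l:scanstate.log

# Restore user state from the store after the new OS is applied
.\loadstate.exe C:\USMTStore /i:MigApp.xml /i:MigDocs.xml /c /v:13 /l:loadstate.log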


We will be using the default XML files, but you can create custom XMLs to control which files and settings are copied and restored.

To set custom XML files, select 'Customize how user profiles are captured' and click on the Files button to add the XML files.

Once the TS is set up we just need to deploy it to a collection so that devices can be added and pick up the wipe-and-reload TS. Once deployed, either wait for the TS to be picked up or run the machine policy and app deployment evaluation client actions.

We can check the smsts.log file to view what is happening; we can see the files being downloaded for the USMT package, and once scanstate kicks off you can see the XML files that are used in the log. We can also check the USMT folder on the state migration point to see if data is being copied.

Once the image is applied, we can check the restore user state task in smsts.log. The command calls loadstate and uses the default or custom XML files. When the TS has fully finished we can log on to Windows, and we should see that the device is running Windows 10 and has the files and settings migrated.

Export GPO assignments using PowerShell

Recently we wanted to review all our Active Directory Group Policy Objects (GPOs). We wanted to see which GPOs were not assigned, or which OUs they were assigned to, so that we could try to consolidate or remove unused GPOs.

There were a couple of hundred in each domain, so I didn't want to have to check each one manually.

There is a PowerShell command you can run to list all GPOs, but it doesn't show assignments.


To get more information on a GPO we can run the Get-GPOReport command, which lets you create either an HTML or XML report.

In this case I want to use XML, as I want to pull information from the report; the only issue is that getting data directly from an XML report is a bit difficult.

To read an XML report in PowerShell you can typecast it to XML by putting [xml] in front of the variable, which makes querying the content easier. The only part of the XML that I really want currently is LinksTo, as this shows where the GPO is assigned. Once I had all this information I was able to create the full script. I will put the script up on GitHub since it's easier for people to copy the script file.

https://github.com/TheSleepyAdmin/Scripts/tree/master/ActiveDirectory/GPO
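For reference, below is a minimal sketch of the core logic; the full script on GitHub handles the export formatting, and the export path here is an example. It assumes the GroupPolicy module from RSAT and uses the LinksTo element from the Get-GPOReport XML:

# Minimal sketch: list where each GPO is linked, or flag it as not linked
Import-Module GroupPolicy

$Results = foreach ($GPO in Get-GPO -All) {
    # Generate the XML report and typecast it so the content can be queried
    [xml]$Report = Get-GPOReport -Guid $GPO.Id -ReportType Xml
    $Links = $Report.GPO.LinksTo
    if ($Links) {
        foreach ($Link in $Links) {
            [PSCustomObject]@{
                GPOName = $GPO.DisplayName
                Link    = $Link.SOMPath
                Enabled = $Link.Enabled
            }
        }
    }
    else {
        # No links found - candidate for cleanup
        [PSCustomObject]@{ GPOName = $GPO.DisplayName; Link = 'Not Linked'; Enabled = $null }
    }
}
$Results | Export-Csv -Path C:\Temp\GPOAssignments.csv -NoTypeInformation   # example export path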

Below is the script running, and this is what the export will look like.

MECM 2002 Cloud Management Gateway Configuration

I have been looking at setting up the MECM cloud management gateway (CMG) for a while but haven't been able to, due to the need for PKI or Azure AD joined devices.

The recent release of MECM 2002 added a new feature that allows token-based authentication. I decided to do a test deployment in my lab to see how this would work before deploying to our production environment.

With a CMG we can manage clients that don't regularly connect back to the corporate network, which has become more of a priority recently.

There is a cost for running the VMs in Azure that will be used for the CMG, and for outbound data transfers. Johan Arwidmark has done a good real-world cost estimate for a CMG:

https://deploymentresearch.com/real-world-costs-for-using-a-cloud-management-gateway-cmg-with-configmgr/

The first step should be to have a read of the docs for planning a CMG:

https://docs.microsoft.com/en-us/mem/configmgr/core/clients/manage/cmg/plan-cloud-management-gateway

Token-based authentication requires MECM 2002, so that is a prerequisite for deploying a CMG in this way.

https://docs.microsoft.com/en-us/mem/configmgr/core/clients/deploy/deploy-clients-cmg-token

There are also some requirements in Azure for deploying the CMG:

  • Account that has Global Admin and Subscription Owner roles.
  • Content filter rules to allow outbound access (If there is a proxy or Firewall filtering traffic)

There is also a requirement for a certificate to be applied to the CMG; it is recommended to use a third-party cert as it will be automatically trusted by clients.

From looking through the documentation, below are the required URLs: https://docs.microsoft.com/en-us/mem/configmgr/core/plan-design/network/internet-endpoints#cmg-connection-point

*.cloudapp.net

*.blob.core.windows.net

login.microsoftonline.com

Once all prerequisites have been confirmed we can start to configure the CMG. We will need to pick a unique cloud service name, as this will be required later; the easiest way to check if the name you have selected is unique is to log on to Azure and go to Cloud services (classic).


Once the cloud service name has been checked, we need to configure the Azure services.

Log on to the MECM console and go to Administration > Cloud Services > Azure Services.

Click Configure Azure Services.

Give the service a name and select Cloud Management.

We need to configure a server and a client application. Give the applications a name and sign in to Azure using an account with the required permissions. I just left user discovery enabled. Click next to finish configuring the Azure services. Once finished, the Azure service should now be showing, and the Azure AD tenant should also show with both applications we just created.

Once this has all been configured, we can start to set up the CMG.

To start the configuration go to Administration > Cloud Services > Cloud Management Gateway.

Click Create Cloud Management Gateway.

Select AzurePublicCloud and sign in. (If the subscription ID doesn't show, it may be that the account you are using is not an owner of the subscription.)

Now we need to configure the cloud service; this is where we will use the name we checked earlier.

  • Select the cert file; this needs to be a PFX with the private key. (I am using one created on my internal CA, but in production I will be using a third-party CA like DigiCert or GoDaddy.)
  • The cert name automatically sets the deployment's service name, which is why we should confirm the name beforehand so we can generate the cert with the same name.
  • Select the region the CMG will be configured in.
  • Either select an existing resource group or create a new one. (I chose a new one to keep the CMG separate from my other Azure resources.)
  • Select the required number of VMs; this can go up to 16. (For high availability it is recommended to configure at least 2 VMs.)
  • Tick the required security settings; I just ticked Enforce TLS 1.2.
  • I also ticked the option to use the CMG as a cloud distribution point.

I left the alerting as default, then just followed the wizard to complete.

Once completed, the CMG should show as provisioning started.


We can also log on to Azure to verify the cloud service has been created.

After the CMG has been configured, we need to install the cloud management gateway connection point to connect MECM to the CMG.

Go to Administration > Site Configuration > Servers and Site System Roles and add the cloud management gateway connection point role to the primary site server in MECM.


After the role has been configured, we need to configure a few settings on the site server, management point and software update point (if installed and configured).

Open the management point properties and tick 'Allow Configuration Manager cloud management gateway traffic'. (If the tick box is greyed out there is an additional step required.)

Go to Administration > Site Configuration > Sites, then configure the primary site to use the Configuration Manager-generated certificate under communication security. Once this is done, go back to the management point and the tick box should now be available.

To allow the software update point to communicate with the CMG, tick 'Allow Configuration Manager cloud management gateway traffic' there as well.

After this has been configured, clients should pick up the CMG as an internet-based management point in the Network tab of the Configuration Manager client agent properties.

Once the client moves off the internal network and does a location lookup, we should see the connection type change from internal to internet.

We can also check the location services log to see if the CMG is being picked up.