{ "version": "https://jsonfeed.org/version/1.1", "user_comment": "This feed allows you to read the posts from this site in any feed reader that supports the JSON Feed format. To add this feed to your reader, copy the following URL -- https://rakhesh.com/feed/json -- and add it your reader.", "next_url": "https://rakhesh.com/feed/json?paged=2", "home_page_url": "https://rakhesh.com", "feed_url": "https://rakhesh.com/feed/json", "language": "en-US", "title": "rakhesh.com", "description": "rakhesh sasidharan's mostly techie oh-so-purpley blog", "icon": "https://i0.wp.com/rakhesh.com/wp-content/uploads/2023/01/cropped-IMG_0512_icon.jpg?fit=512%2C512&ssl=1", "items": [ { "id": "https://rakhesh.com/?p=7529", "url": "https://rakhesh.com/azure/delete-all-entra-id-app-registrations-and-enterprise-applications/", "title": "Delete ALL Entra ID App Registrations and Enterprise Applications", "content_html": "
I was deleting a test tenant today, and one of the pre-requisites before I can delete it is to remove all App Registrations and Enterprise Applications.
\n\nUnfortunately there’s no select-all-and-delete button in the GUI. So here’s what you do:
Connect-MgGraph -Scopes \"Application.ReadWrite.All\"
In the browser window that pops up, sign in with a Global Admin or Application Admin account (usually Global Admin, coz you also have to delete all users from the tenant, so you are likely left with just the Global Admin).
\nThen do:
Get-MgApplication | %{ Remove-MgApplication -Confirm:$false -ApplicationId $_.Id }\r\nGet-MgServicePrincipal | %{ Remove-MgServicePrincipal -ServicePrincipalId $_.Id -Confirm:$false }
You might get some errors with the latter, as some Enterprise Applications are from Microsoft. Like this:
Remove-MgServicePrincipal_Delete: Specified App Principal ID is Microsoft Internal.\r\n\r\nStatus: 400 (BadRequest)\r\nErrorCode: Request_BadRequest\r\nDate: 2024-03-03T13:23:48\r\n\r\nHeaders:\r\nCache-Control : no-cache\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : a6e68670-b3ad-4b9d-bf04-54f121e9b672\r\nclient-request-id : 1b8d50f7-89cf-41e6-8e18-fd6335ffba8f\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"West Europe\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"004\",\"RoleInstance\":\"AM2PEPF0000BE54\"}}\r\nx-ms-resource-unit : 1\r\nDate : Sun, 03 Mar 2024 13:23:48 GM
In my case what I was left with were the Graph, SharePoint, and PnP Enterprise Applications. I could delete them from the portal.
\n", "content_text": "Deleting a test tenant today and one of the pre-requisites before I can delete is to remove all App Registrations and Enterprise Applications.\n\nUnfortunately there’s no select all and delete button in the GUI. So here’s what you do:Connect-MgGraph -Scopes \"Application.ReadWrite.All\"In the browser window that pops-up, sign in with a Global Admin or Application Admin account (Global Admin usually, coz you also have to delete all users from the tenant so you are likely left with just the Global Admin).\nThen do:Get-MgApplication | %{ Remove-MgApplication -Confirm:$false -ApplicationId $_.Id }\r\nGet-MgServicePrincipal | %{ Remove-MgServicePrincipal -ServicePrincipalId $_.Id -Confirm:$false }You might get some errors with the latter, as some Enterprise Applications are from Microsoft. Like this:Remove-MgServicePrincipal_Delete: Specified App Principal ID is Microsoft Internal.\r\n\r\nStatus: 400 (BadRequest)\r\nErrorCode: Request_BadRequest\r\nDate: 2024-03-03T13:23:48\r\n\r\nHeaders:\r\nCache-Control : no-cache\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : a6e68670-b3ad-4b9d-bf04-54f121e9b672\r\nclient-request-id : 1b8d50f7-89cf-41e6-8e18-fd6335ffba8f\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"West Europe\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"004\",\"RoleInstance\":\"AM2PEPF0000BE54\"}}\r\nx-ms-resource-unit : 1\r\nDate : Sun, 03 Mar 2024 13:23:48 GMIn my case what I was left with were the Graph, SharePoint, and PnP Enterprise Applications. 
I could delete them from the portal.", "date_published": "2024-03-03T13:26:00+00:00", "date_modified": "2024-03-03T13:26:00+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7525", "url": "https://rakhesh.com/azure/psa-macos-clearing-cached-creds-of-office-365-apps/", "title": "PSA: macOS clearing cached creds of Office 365 apps", "content_html": "This one wasted a bunch of my time today. Eugh!
\nI was trying to open some docs in Word but each time it would try to sign in with the creds of a test tenant rather than the real one. Even though I’d click the option to sign in with another account, and enter the correct creds, something was amiss and it wouldn’t work. I tried all sorts of things like adding the real tenant as a OneDrive connection, deleting a bunch of entries in the Keychain, and deleting a bunch of folders within sub-folders of ~/Library
(the last two being the result of some Googling). Nothing helped. Even uninstalling and re-installing didn’t (not that I was expecting it to).
I even signed out of the accounts in Word and other Office 365 apps but that didn’t help either.
\nFinally I stumbled upon:
\n\nAnd then:
\n\nAnd that did the trick!
\nPutting it here in case it helps someone else and saves them some time. It certainly wasted about 45 mins of my evening.
\n", "content_text": "This one wasted a bunch of my time today. Eugh!\nI was trying to open some docs on Word but each time it would try and sign in with creds of a test tenant rather than the real one. Even though I’d click on the option to sign in with another account, and enter the correct creds, something was amiss and it wouldn’t work. Tried doing all sorts of things like adding the real tenant as a OneDrive connection, deleting a bunch of entries in the Keychain, and deleting a bunch of folders within sub-folders of ~/Library (the last two being the result of some Googling). Nothing helped. Even uninstalling re-installing didn’t (not that I was expecting it to).\nI even signed out of the accounts in Word and other Office 365 apps but that didn’t help either.\nFinally I stumbled upon:\n\nAnd then:\n\nAnd that did the trick!\nPutting it here in case it helps someone else and saves them some time. Certainly wasted about 45 mins of my time today evening.", "date_published": "2024-03-01T17:18:52+00:00", "date_modified": "2024-03-01T17:18:52+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7518", "url": "https://rakhesh.com/azure/quickly-check-if-a-user-has-onedrive/", "title": "Quickly check if a user has OneDrive", "content_html": "Had a request to quickly confirm whether a user had OneDrive. I couldn’t be bothered to activate my admin roles to check properly. So here’s what I did.
\nI went to https://<tenant name>-my.sharepoint.com/personal/<upn>
where <upn>
has all special characters replaced with underscores. Thus firstname.lastname@mydomain.com
becomes firstname_lastname_mydomain_com
.
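The replacement can be sketched in the shell with tr (the tenant name and UPN below are made-up examples):

```shell
# Build the OneDrive personal-site URL from a UPN.
# Dots and the @ sign in the UPN become underscores in the path.
tenant="mytenant"                          # example tenant name
upn="firstname.lastname@mydomain.com"      # example UPN
path=$(printf '%s' "$upn" | tr '.@' '__')
echo "https://${tenant}-my.sharepoint.com/personal/${path}"
# → https://mytenant-my.sharepoint.com/personal/firstname_lastname_mydomain_com
```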
If the user has a OneDrive you will get an access-denied message, but if the user does not have OneDrive you get a different error.
\n[Screenshots: the two error pages side by side — “No OneDrive” vs. “Yes OneDrive”]
\n", "content_text": "Had a request to quickly confirm whether a user had OneDrive. I couldn’t be bothered to activate my admin roles to check properly. So here’s what I did.\nI went to https://<tenant name>-my.sharepoint.com/personal/<upn> where <upn> has all special characters replaced. Thus firstname.lastname@mydomain.com becomes firstname_lastname_mydomain_com.\nIf the user has a OneDrive you will get an access denied message but if the user does not have OneDrive you get a different error.\n\n\n\n\n\n\n\nNo OneDrive\nYes OneDrive\n\n\n\n ", "date_published": "2024-02-26T13:20:06+00:00", "date_modified": "2024-02-26T13:20:06+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "onedrive", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7510", "url": "https://rakhesh.com/azure/bicep-stuck-on-registering-commands-in-vs-code/", "title": "Bicep stuck on \u2018Registering commands\u2019 in VS Code", "content_html": "
Had to work with Bicep today and VS Code was stuck on this for some reason:
\n\nActually, prior to this it was also throwing an error about being unable to decompile something from my clipboard. I didn’t capture that error, but that went away once I unticked this setting:
\n\nBut the other error refused to go away. Expanding the details didn’t show anything useful either.
\n\nSo I clicked View (in the menubar) and Output:
\n\nAnd in the pane that opened, went to Bicep:
\n\nHere’s what the output showed:
2024-02-24T10:31:37.294Z info: Current log level: debug.\r\n2024-02-24T10:31:37.297Z info: Acquiring dotnet runtime...\r\n2024-02-24T10:31:37.297Z info: Found config for 'dotnetAcquisitionExtension.existingDotnetPath': {\"extensionId\":\"ms-azuretools.vscode-bicep\",\"path\":\"/usr/local/share/dotnet/dotnet\"}\r\n2024-02-24T10:31:37.312Z debug: Found dotnet command at '/usr/local/share/dotnet/dotnet'.\r\n2024-02-24T10:31:37.312Z info: Launching Bicep language service...\r\n2024-02-24T10:31:37.313Z debug: Found language server at '/Users/xxx/.vscode/extensions/ms-azuretools.vscode-bicep-0.25.53/bicepLanguageServer/Bicep.LangServer.dll'.\r\nYou must install or update .NET to run this application.\r\n\r\nApp: /Users/rakhesh/.vscode/extensions/ms-azuretools.vscode-bicep-0.25.53/bicepLanguageServer/Bicep.LangServer.dll\r\nArchitecture: arm64\r\nFramework: 'Microsoft.NETCore.App', version '8.0.0' (arm64)\r\n.NET location: /usr/local/share/dotnet/\r\n\r\nThe following frameworks were found:\r\n 6.0.10 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.11 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.12 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.13 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.14 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.16 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 6.0.18 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.0 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.1 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.2 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.3 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.5 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n 7.0.7 at [/usr/local/share/dotnet/shared/Microsoft.NETCore.App]\r\n\r\nLearn about framework resolution:\r\nhttps://aka.ms/dotnet/app-launch-failed\r\n\r\nTo install missing 
framework, download:\r\nhttps://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=8.0.0&arch=arm64&rid=osx.14-arm64
It couldn’t find .NET? That doesn’t make sense. I am pretty sure I have dotnet installed. Use dotnet --list-sdks
to see the list of installed SDKs and dotnet --list-runtimes
for the list of runtimes. And dotnet --info
for everything. Here’s the output of the latter:
$ dotnet --info\r\n.NET SDK:\r\n Version: 8.0.101\r\n Commit: 6eceda187b\r\n Workload version: 8.0.100-manifests.2fd734c4\r\n\r\nRuntime Environment:\r\n OS Name: Mac OS X\r\n OS Version: 14.2\r\n OS Platform: Darwin\r\n RID: osx-arm64\r\n Base Path: /opt/homebrew/Cellar/dotnet/8.0.1/libexec/sdk/8.0.101/\r\n\r\n.NET workloads installed:\r\n Workload version: 8.0.100-manifests.2fd734c4\r\nThere are no installed workloads to display.\r\n\r\nHost:\r\n Version: 8.0.1\r\n Architecture: arm64\r\n Commit: bf5e279d92\r\n\r\n.NET SDKs installed:\r\n 8.0.101 [/opt/homebrew/Cellar/dotnet/8.0.1/libexec/sdk]\r\n\r\n.NET runtimes installed:\r\n Microsoft.AspNetCore.App 8.0.1 [/opt/homebrew/Cellar/dotnet/8.0.1/libexec/shared/Microsoft.AspNetCore.App]\r\n Microsoft.NETCore.App 8.0.1 [/opt/homebrew/Cellar/dotnet/8.0.1/libexec/shared/Microsoft.NETCore.App]\r\n\r\nOther architectures found:\r\n x64 [/usr/local/share/dotnet/x64]\r\n\r\nEnvironment variables:\r\n DOTNET_ROOT [/opt/homebrew/Cellar/dotnet/8.0.1/libexec]\r\n\r\nglobal.json file:\r\n Not found\r\n\r\nLearn more:\r\n https://aka.ms/dotnet/info\r\n\r\nDownload .NET:\r\n https://aka.ms/dotnet/download
One thing though, they all seem to be installed under /opt/homebrew/Cellar
, while the output from the Bicep extension above was looking at /usr/local/share/dotnet/
. That path is present in the output of dotnet --info
too, but as the x64 architecture. I was on my arm64 Mac though. Could that be the issue?
I remembered overriding some paths in the past and that’s when I was on my Intel iMac, so maybe that’s the reason? That time I had added the following to my VS Code settings file:
\"dotnetAcquisitionExtension.existingDotnetPath\": [\r\n {\r\n \"extensionId\": \"ms-azuretools.vscode-bicep\", \r\n \"path\": \"/usr/local/share/dotnet/dotnet\"\r\n },\r\n {\r\n \"extensionId\": \"msazurermtools.azurerm-vscode-tools\", \r\n \"path\": \"/usr/local/share/dotnet/dotnet\"\r\n }\r\n ],
So I removed that, restarted VS Code, and now there are no errors!
\nI am a bit concerned though, coz the output log now shows this:
2024-02-24T10:56:04.909Z info: Current log level: debug.\r\n2024-02-24T10:56:04.911Z info: Acquiring dotnet runtime...\r\n2024-02-24T10:56:07.303Z debug: Found dotnet command at '/Users/xxx/Library/Application Support/Code/User/globalStorage/ms-dotnettools.vscode-dotnet-runtime/.dotnet/8.0.2~arm64/dotnet'.\r\n2024-02-24T10:56:07.303Z info: Launching Bicep language service...\r\n2024-02-24T10:56:07.303Z debug: Found language server at '/Users/xxx/.vscode/extensions/ms-azuretools.vscode-bicep-0.25.53/bicepLanguageServer/Bicep.LangServer.dll'.\r\n2024-02-24T10:56:07.820Z info: Bicep language service started.\r\n[Info - 10:56:07] Running on processId 64657
Is it downloading its own version of the runtime and using that? Sounds like it.
\nThe latest version of dotnet as of this blog post is 8.0.2, released on 15th Feb. Looks like Homebrew doesn’t know of it yet, as my system is still on 8.0.1. On my laptop /opt/homebrew/bin/dotnet
is a link to the current version in the Cellar ../Cellar/dotnet/8.0.1/bin/dotnet
.
I’d like to keep things consistent, so I added the snippet I removed back… but with the correct path.
\"dotnetAcquisitionExtension.existingDotnetPath\": [\r\n {\r\n \"extensionId\": \"ms-azuretools.vscode-bicep\", \r\n \"path\": \"/opt/homebrew/bin/dotnet\"\r\n },\r\n {\r\n \"extensionId\": \"msazurermtools.azurerm-vscode-tools\", \r\n \"path\": \"/opt/homebrew/bin/dotnet\"\r\n }\r\n ],
Restarted VS Code, and now it’s happy and uses the correct path.
2024-02-24T11:19:02.103Z info: Current log level: debug.\r\n2024-02-24T11:19:02.105Z info: Acquiring dotnet runtime...\r\n2024-02-24T11:19:02.106Z info: Found config for 'dotnetAcquisitionExtension.existingDotnetPath': {\"extensionId\":\"ms-azuretools.vscode-bicep\",\"path\":\"/opt/homebrew/bin/dotnet\"}\r\n2024-02-24T11:19:02.117Z debug: Found dotnet command at '/opt/homebrew/bin/dotnet'.\r\n2024-02-24T11:19:02.117Z info: Launching Bicep language service...\r\n2024-02-24T11:19:02.117Z debug: Found language server at '/Users/xxx/.vscode/extensions/ms-azuretools.vscode-bicep-0.25.53/bicepLanguageServer/Bicep.LangServer.dll'.\r\n2024-02-24T11:19:03.347Z info: Bicep language service started.\r\n[Info - 11:19:03] Running on processId 70847
One last thing, more as a note for myself. On macOS, with Homebrew, it is possible to install multiple versions of dotnet via brew install dotnet@8
and brew install dotnet@7
etc.
One of our users was getting the following error today with Power Platform.
\n\nIrritatingly, Power Platform also gave a red herring error message like this:
\n\nThe 429 error is a result of throttling. Licensed users can do a maximum of 40k requests per 24 hours after which they start getting throttled. (The limit is smaller for Pay As You Go plans and unlicensed users. And it accumulates across licenses, so one could have a higher limit too). I was able to run a report via the steps in this document and confirm that this particular user was indeed having around 200k requests per 24 hours and.
\nThe only solution is to wait for the throttling period to end, and fix the Flow if there are inefficiencies. It’s also possible to purchase add-ons to increase the limit by 50k requests per 24 hours, per add-on.
\n", "content_text": "One of our users was getting the following error today with Power Platform.\n\nIrritatingly, Power Platform also gave a red herring error message like this:\n\nThe 429 error is a result of throttling. Licensed users can do a maximum of 40k requests per 24 hours after which they start getting throttled. (The limit is smaller for Pay As You Go plans and unlicensed users. And it accumulates across licenses, so one could have a higher limit too). I was able to run a report via the steps in this document and confirm that this particular user was indeed having around 200k requests per 24 hours and.\nOnly solution is to wait for the throttling period to end. And fix the Flow if there are inefficiencies. It’s also possible to purchase add-ons to increase the limits by 50k requests per hour, per add-on.", "date_published": "2024-02-22T14:24:25+00:00", "date_modified": "2024-02-22T14:24:25+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Power Platform" ] }, { "id": "https://rakhesh.com/?p=7495", "url": "https://rakhesh.com/linux-bsd/automatically-publishing-new-versions-of-my-graph-powershell-docker-image/", "title": "Automatically publishing new versions of my Graph PowerShell Docker image", "content_html": "A while ago I blogged about creating a Docker image with PowerShell and the latest version of Graph. Thing is, Microsoft keeps releasing new versions of the module pretty regularly and unless I remember to go and build a new version of the image each time, it is quickly outdated.
\nThen I came across this toot on Mastodon. He had set up something to update his Unbound DNS Docker image (interestingly, something I too had dabbled with a long time ago) each time NLnet Labs releases a new version of Unbound DNS. Here’s a link to his GitHub action which does this, and the key things are: 1) it runs on a schedule (I had been too lazy to figure out GitHub Actions could do that!) and 2) it very smartly uses the APIs to compare the version of his container with the version of Unbound DNS (which is released on GitHub, so he can query the releases) and only updates the image when there are changes. Nice!
\nI used his scheduler idea, but had to make some changes to the other bits to cater to my specific use case.
\nGetting the latest version of the PowerShell SDK from GitHub is easy. This command will give all the releases:
curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases
It’s in JSON format, so pipe it via jq
. The JSON is an array of entries like this:
{\r\n \"url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619\",\r\n \"assets_url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/assets\",\r\n \"upload_url\": \"https://uploads.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/assets{?name,label}\",\r\n \"html_url\": \"https://github.com/microsoftgraph/msgraph-sdk-powershell/releases/tag/1.24.0\",\r\n \"id\": 96023619,\r\n \"author\": {\r\n \"login\": \"peombwa\",\r\n \"id\": 7061532,\r\n \"node_id\": \"MDQ6VXNlcjcwNjE1MzI=\",\r\n \"avatar_url\": \"https://avatars.githubusercontent.com/u/7061532?v=4\",\r\n \"gravatar_id\": \"\",\r\n \"url\": \"https://api.github.com/users/peombwa\",\r\n \"html_url\": \"https://github.com/peombwa\",\r\n \"followers_url\": \"https://api.github.com/users/peombwa/followers\",\r\n \"following_url\": \"https://api.github.com/users/peombwa/following{/other_user}\",\r\n \"gists_url\": \"https://api.github.com/users/peombwa/gists{/gist_id}\",\r\n \"starred_url\": \"https://api.github.com/users/peombwa/starred{/owner}{/repo}\",\r\n \"subscriptions_url\": \"https://api.github.com/users/peombwa/subscriptions\",\r\n \"organizations_url\": \"https://api.github.com/users/peombwa/orgs\",\r\n \"repos_url\": \"https://api.github.com/users/peombwa/repos\",\r\n \"events_url\": \"https://api.github.com/users/peombwa/events{/privacy}\",\r\n \"received_events_url\": \"https://api.github.com/users/peombwa/received_events\",\r\n \"type\": \"User\",\r\n \"site_admin\": false\r\n },\r\n \"node_id\": \"RE_kwDOCno9Qs4FuTRD\",\r\n \"tag_name\": \"1.24.0\",\r\n \"target_commitish\": \"dev\",\r\n \"name\": \"1.24.0 Release\",\r\n \"draft\": false,\r\n \"prerelease\": false,\r\n \"created_at\": \"2023-03-23T14:06:32Z\",\r\n \"published_at\": \"2023-03-23T16:41:52Z\",\r\n \"assets\": [\r\n\r\n ],\r\n \"tarball_url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/tarball/1.24.0\",\r\n 
\"zipball_url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/zipball/1.24.0\",\r\n \"body\": \"### Release Notes\\r\\n- Refreshes module with the latest APIs #1895\\r\\n- Adds examples to help files #1692\",\r\n \"reactions\": {\r\n \"url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/reactions\",\r\n \"total_count\": 1,\r\n \"+1\": 0,\r\n \"-1\": 0,\r\n \"laugh\": 0,\r\n \"hooray\": 1,\r\n \"confused\": 0,\r\n \"heart\": 0,\r\n \"rocket\": 0,\r\n \"eyes\": 0\r\n }\r\n },
What we want is the name
property. The following can extract that:
curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name'
The '.[] | .name'
bit is the script to jq
. It tells it to read the array (.[]
) and extract name
. Output looks like this:
\"2.14.0\"\r\n\"2.14.1\"\r\n\"2.13.1\"\r\n\"2.13.0\"\r\n\"2.12.0\"\r\n\"2.11.1\"\r\n\"2.11.0\"\r\n\"2.10.0\"\r\n\"2.9.0\"\r\n\"2.8.0 Release\"\r\n\"2.6.1\"\r\n\"2.5.0\"\r\n\"2.4.0\"\r\n\"2.3.0\"\r\n\"2.2.0\"\r\n\"2.1.0\"\r\n\"2.0.0\"\r\n\"1.28.0 Release\"\r\n\"2.0.0-rc3\"\r\n\"2.0.0-rc1\"\r\n\"2.0.0-preview9\"\r\n\"1.27.0 Release\"\r\n\"1.26.0 Release\"\r\n\"1.25.0 Release\"\r\n\"2.0.0-preview8\"\r\n\"2.0.0-preview7\"\r\n\"1.24.0 Release\"\r\n\"2.0.0-preview6\"\r\n\"1.23.0 Release\"\r\n\"2.0.0-preview5\"
So I should sort it in reverse order, treating the text as version numbers, then take the first element, and also remove the quotation marks.
curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name' | sort -Vr | head -n 1 | tr -d \"\\\"\"
The -Vr
switches to sort
tell it to treat the entries as version numbers and do a reverse sort. The head -n 1
takes the first element. And tr -d
removes the double quotes.
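To illustrate why the -V matters (a quick check with a few of the release names from the output above): a plain lexical sort would rank "2.8.0" above "2.14.1", while a version sort compares the numeric components.

```shell
# Version sort: 2.14.1 beats 2.8.0 because 14 > 8 numerically.
printf '"2.14.0"\n"2.8.0 Release"\n"2.14.1"\n' | sort -Vr | head -n 1 | tr -d '"'
# → 2.14.1

# Plain lexical reverse sort would wrongly pick 2.8.0 ('8' > '1' as characters).
printf '"2.14.0"\n"2.8.0 Release"\n"2.14.1"\n' | sort -r | head -n 1
# → "2.8.0 Release"
```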
Next up, how do I get the latest version of my Docker image? I was hoping I could query GitHub to get the latest version of the package somehow, but that doesn’t work. For one, GitHub doesn’t seem to have a way of getting all packages of a repo – all it can do is get all packages for an organization, or all packages for a user, and some variants of these – but all of these require authentication. Not a problem in itself; I generated a token to test things out, and used the API to get all versions of the package owned by me.
curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions
(The token in the examples is not valid).
\nThe result is again JSON that looks like this:
{\r\n \"id\": 180843308,\r\n \"name\": \"sha256:dcf2eba746ab8b96a4576be0627ab05a0d5d9c436807322c292d0c3bb258c889\",\r\n \"url\": \"https://api.github.com/users/rakheshster/packages/container/powershell-msgraph/versions/180843308\",\r\n \"package_html_url\": \"https://github.com/users/rakheshster/packages/container/package/powershell-msgraph\",\r\n \"created_at\": \"2024-02-19T18:44:20Z\",\r\n \"updated_at\": \"2024-02-19T18:44:20Z\",\r\n \"description\": \"'PowerShell + MS Graph container'\",\r\n \"html_url\": \"https://github.com/users/rakheshster/packages/container/powershell-msgraph/180843308\",\r\n \"metadata\": {\r\n \"package_type\": \"container\",\r\n \"container\": {\r\n \"tags\": [\r\n \"2.13.1\"\r\n ]\r\n }\r\n }\r\n}
I can use the following to just extract version numbers. That’s the tags basically.
curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions | jq '.[] | .metadata.container.tags | .[]'
Output looks like this:
\"2.14.1\"\r\n\"2.14.0\"\r\n\"2.13.0\"\r\n\"2.12.0\"\r\n\"2.13.1\"
Of course I’d have to reverse sort just to be sure, take only the first element, and remove the double quotes.
curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions | jq '.[] | .metadata.container.tags | .[]' | sort -Vr | head -n 1 | tr -d \"\\\"\"
The jq
snippet here is a bit more involved. Read the array (.[]
), extract the property with the tags (.metadata.container.tags
) – which is itself an array, so read that (.[]
).
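A minimal, self-contained illustration of that jq script, using a cut-down sample of the package JSON (two fake entries):

```shell
# Two fake package versions; the first .[] unwraps the outer array, then
# .metadata.container.tags is itself an array, so another .[] unwraps it.
printf '%s' '[{"metadata":{"container":{"tags":["2.13.1"]}}},{"metadata":{"container":{"tags":["2.14.0"]}}}]' \
  | jq '.[] | .metadata.container.tags | .[]'
# → "2.13.1"
#   "2.14.0"
```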
While I could have gone with this, I didn’t. Instead I decided to query DockerHub as I could just do it without a personal token etc.
curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq
An example entry is as follows. All these are part of an array called results
.
{\r\n \"creator\": 5275378,\r\n \"id\": 569595532,\r\n \"images\": [\r\n {\r\n \"architecture\": \"amd64\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:fa9486e10c6bca45dbd97bd3fab0d572605ce515474288568aeafeabf0f7c9a7\",\r\n \"os\": \"linux\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 240918555,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.295181Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.105558Z\"\r\n },\r\n {\r\n \"architecture\": \"arm64\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:8e18f05bfd87db92364be195e667fcd4b6f4009f6d8e3f247ce1ba34bedff22a\",\r\n \"os\": \"linux\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 240918555,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.308301Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.301845Z\"\r\n },\r\n {\r\n \"architecture\": \"unknown\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:13736cedff20fc9a58d0765f85b5835dd776676eef651f540cba845bca937862\",\r\n \"os\": \"unknown\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 10970,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.286013Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.48484Z\"\r\n },\r\n {\r\n \"architecture\": \"unknown\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:48e99f7c4c7fdd783a9ad269516cb734149cacd854202d6345be7d46d664e8de\",\r\n \"os\": \"unknown\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 10970,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.295297Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.64608Z\"\r\n }\r\n ],\r\n \"last_updated\": \"2023-12-16T23:07:05.16389Z\",\r\n \"last_updater\": 5275378,\r\n \"last_updater_username\": \"rakheshster\",\r\n \"name\": \"2.0.0\",\r\n \"repository\": 22731596,\r\n \"full_size\": 240918555,\r\n \"v2\": 
true,\r\n \"tag_status\": \"inactive\",\r\n \"tag_last_pulled\": \"2024-01-15T19:36:18.308301Z\",\r\n \"tag_last_pushed\": \"2023-12-16T23:07:05.16389Z\",\r\n \"media_type\": \"application/vnd.oci.image.index.v1+json\",\r\n \"content_type\": \"image\",\r\n \"digest\": \"sha256:1affd92d1fa337a7df78ce40837b28d5d4261e6f6ef3e52219aa9e1d9346a18b\"\r\n}
To extract the name
(which has the version) I can do:
curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq '.results[] | .name'
And the same drill as above to get just the latest version:
curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq -r '.results[] | .name' | sort -Vr | head -n 1
Boom! So now I have the latest version in DockerHub. And the latest version of the module from Microsoft (via GitHub).
\nSince I have a workflow that already builds the container, I wanted to keep things consistent with that as much as possible. So I added a new job to the existing workflow, which looks like this (just a snippet):
jobs:\r\n # Just one job here ...\r\n build-linux-box:\r\n runs-on: ubuntu-latest\r\n # the steps of my job\r\n env:\r\n # Coz of https://github.com/orgs/community/discussions/45969 & https://github.com/docker/build-push-action/issues/755\r\n BUILDX_NO_DEFAULT_ATTESTATIONS: 1\r\n steps:\r\n # Checkout the code from GitHub\r\n - name: Checkout Code\r\n uses: actions/checkout@v4\r\n\r\n # Setup QEMU for building other platforms\r\n # # https://github.com/docker/setup-qemu-action\r\n - name: Set up QEMU\r\n uses: docker/setup-qemu-action@v3
I added a new job above it that checks if there’s a version difference.
jobs:\r\n # This one checks if any updates are needed\r\n update-check-job:\r\n runs-on: ubuntu-latest\r\n # Outputs of this job\r\n outputs:\r\n # This comes from the step below\r\n SHOULD_RUN: ${{ steps.EXIT_ACTION.outputs.SHOULD_RUN }}\r\n SDK_VERSION: ${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\r\n # The steps of this job\r\n steps:\r\n - name: Get versions\r\n id: GET_VERSIONS\r\n run: |\r\n echo SDK_VERSION=\"$(curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name' | sort -Vr | head -n 1 | tr -d \"\\\"\")\" >> $GITHUB_OUTPUT\r\n echo DOCKERHUB_VERSION=\"$(curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq -r '.results[] | .name' | sort -Vr | head -n 1)\" >> $GITHUB_OUTPUT\r\n \r\n - name: Exit if no update available\r\n id: EXIT_ACTION\r\n run: |\r\n if [[ \"${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\" == \"${{ steps.GET_VERSIONS.outputs.DOCKERHUB_VERSION }}\" ]]; then\r\n echo \"No update needed\"\r\n echo SHOULD_RUN=false >> $GITHUB_OUTPUT\r\n else\r\n echo \"Needs updating to ${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\"\r\n echo SHOULD_RUN=true >> $GITHUB_OUTPUT\r\n fi\r\n\r\n # This one actually builds.\r\n # This is a copy paste of the docker-build-and-push.yaml action\r\n # But I change 'github.event.inputs.moduleversion' to 'needs.update-check-job.outputs.SDK_VERSION'\r\n build-linux-box:\r\n runs-on: ubuntu-latest\r\n needs: update-check-job\r\n if: needs.update-check-job.outputs.SHOULD_RUN == 'true'\r\n env:\r\n # Coz of https://github.com/orgs/community/discussions/45969 & https://github.com/docker/build-push-action/issues/755\r\n BUILDX_NO_DEFAULT_ATTESTATIONS: 1\r\n steps:\r\n # Checkout the code from GitHub\r\n - name: Checkout Code\r\n uses: actions/checkout@v4\r\n\r\n # Setup QEMU for building other platforms\r\n # # https://github.com/docker/setup-qemu-action\r\n - name: Set up QEMU\r\n uses: 
docker/setup-qemu-action@v3
What this new job does is check whether there’s a difference in versions. If yes, it sets an output variable SHOULD_RUN to true. The second job now only runs if that variable is set to true (needs.update-check-job.outputs.SHOULD_RUN == 'true').
Since I know the version of the module to target, I output that as part of the first job, and the second job uses that when building the container.
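As an aside, the $GITHUB_OUTPUT plumbing those two jobs rely on is nothing magic: it is just a file of key=value lines that Actions reads back and exposes as step outputs. Here’s a local simulation of the update-check logic (a sketch with made-up version values, not part of the workflow):

```shell
# $GITHUB_OUTPUT is simply a file path; steps append key=value lines to it
# and GitHub exposes them as step outputs. Simulate with a temp file:
GITHUB_OUTPUT=$(mktemp)

SDK_VERSION="2.14.1"        # pretend: latest GitHub release
DOCKERHUB_VERSION="2.13.1"  # pretend: latest DockerHub tag

# Same comparison the "Exit if no update available" step performs
if [ "$SDK_VERSION" = "$DOCKERHUB_VERSION" ]; then
  echo "SHOULD_RUN=false" >> "$GITHUB_OUTPUT"
else
  echo "SHOULD_RUN=true" >> "$GITHUB_OUTPUT"
fi
echo "SDK_VERSION=$SDK_VERSION" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

With the values above this leaves SHOULD_RUN=true and SDK_VERSION=2.14.1 in the file, which is what the second job picks up via needs.update-check-job.outputs.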
- name: Build and Push\r\n uses: docker/build-push-action@v5\r\n with:\r\n file: Dockerfile\r\n platforms: \"linux/amd64,linux/arm64\"\r\n # Only amd64 is supported for now in the Ubuntu image https://learn.microsoft.com/en-gb/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.4#ubuntu\r\n # thanks for the syntax https://github.com/docker/build-push-action/issues/557\r\n build-args: |\r\n GRAPH_VERSION=${{ needs.update-check-job.outputs.SDK_VERSION }}\r\n push: ${{ github.ref == 'refs/heads/main' }}\r\n outputs: type=image,name=target,annotation-index.org.opencontainers.image.description='PowerShell + MS Graph container'\r\n tags: |\r\n rakheshster/powershell-msgraph:${{ needs.update-check-job.outputs.SDK_VERSION }}\r\n ghcr.io/rakheshster/powershell-msgraph:${{ needs.update-check-job.outputs.SDK_VERSION }}
I am pretty pleased with it overall! To give credit, it was this StackOverflow post that gave me the idea of using multiple jobs. That felt neater than the other solutions.
\nAnd that’s it! Now GitHub Actions will automatically publish new versions of this image whenever there’s a new Graph module. As luck would have it, today version 2.14.1 was released and the image was automatically built. Nice!
\n", "content_text": "A while ago I blogged about creating a Docker image with PowerShell and the latest version of Graph. Thing is, Microsoft keeps releasing new versions of the module pretty regularly and unless I remember to go and build a new version of the image each time, it is quickly outdated.\nThen I came across this toot on Mastodon. He had setup something to update his Unbound DNS Docker image (interestingly, something I too had dabbled with a long time ago) each time NLnet Labs releases a new version of Unbound DNS. Here’s a link to his GitHub action which does this and the key thing is: 1) it runs on a schedule (I had been too lazy to figure out GitHub actions could do that!) and 2) it very smartly uses the APIs to check the version of his container vs the version of the Unbound DNS (which is released on GitHub, so he can query the releases) and only updates the image in case of changes. Nice!\nI used his scheduler idea, but had to make some changes to the other bits to cater to my specific use case.\nGetting the latest version of the PowerShell SDK from GitHub is easy. This command will give all the releases:curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releasesIt’s in JSON format, so pipe it via jq. 
The JSON is an array of entries like this:{\r\n \"url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619\",\r\n \"assets_url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/assets\",\r\n \"upload_url\": \"https://uploads.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/assets{?name,label}\",\r\n \"html_url\": \"https://github.com/microsoftgraph/msgraph-sdk-powershell/releases/tag/1.24.0\",\r\n \"id\": 96023619,\r\n \"author\": {\r\n \"login\": \"peombwa\",\r\n \"id\": 7061532,\r\n \"node_id\": \"MDQ6VXNlcjcwNjE1MzI=\",\r\n \"avatar_url\": \"https://avatars.githubusercontent.com/u/7061532?v=4\",\r\n \"gravatar_id\": \"\",\r\n \"url\": \"https://api.github.com/users/peombwa\",\r\n \"html_url\": \"https://github.com/peombwa\",\r\n \"followers_url\": \"https://api.github.com/users/peombwa/followers\",\r\n \"following_url\": \"https://api.github.com/users/peombwa/following{/other_user}\",\r\n \"gists_url\": \"https://api.github.com/users/peombwa/gists{/gist_id}\",\r\n \"starred_url\": \"https://api.github.com/users/peombwa/starred{/owner}{/repo}\",\r\n \"subscriptions_url\": \"https://api.github.com/users/peombwa/subscriptions\",\r\n \"organizations_url\": \"https://api.github.com/users/peombwa/orgs\",\r\n \"repos_url\": \"https://api.github.com/users/peombwa/repos\",\r\n \"events_url\": \"https://api.github.com/users/peombwa/events{/privacy}\",\r\n \"received_events_url\": \"https://api.github.com/users/peombwa/received_events\",\r\n \"type\": \"User\",\r\n \"site_admin\": false\r\n },\r\n \"node_id\": \"RE_kwDOCno9Qs4FuTRD\",\r\n \"tag_name\": \"1.24.0\",\r\n \"target_commitish\": \"dev\",\r\n \"name\": \"1.24.0 Release\",\r\n \"draft\": false,\r\n \"prerelease\": false,\r\n \"created_at\": \"2023-03-23T14:06:32Z\",\r\n \"published_at\": \"2023-03-23T16:41:52Z\",\r\n \"assets\": [\r\n\r\n ],\r\n \"tarball_url\": 
\"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/tarball/1.24.0\",\r\n \"zipball_url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/zipball/1.24.0\",\r\n \"body\": \"### Release Notes\\r\\n- Refreshes module with the latest APIs #1895\\r\\n- Adds examples to help files #1692\",\r\n \"reactions\": {\r\n \"url\": \"https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases/96023619/reactions\",\r\n \"total_count\": 1,\r\n \"+1\": 0,\r\n \"-1\": 0,\r\n \"laugh\": 0,\r\n \"hooray\": 1,\r\n \"confused\": 0,\r\n \"heart\": 0,\r\n \"rocket\": 0,\r\n \"eyes\": 0\r\n }\r\n },What we want is the name property. The following can extract that:curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name'The '.[] | .name' bit is the script to jq. It tells it to read the array (.[]) and extract name. Output looks like this:\"2.14.0\"\r\n\"2.14.1\"\r\n\"2.13.1\"\r\n\"2.13.0\"\r\n\"2.12.0\"\r\n\"2.11.1\"\r\n\"2.11.0\"\r\n\"2.10.0\"\r\n\"2.9.0\"\r\n\"2.8.0 Release\"\r\n\"2.6.1\"\r\n\"2.5.0\"\r\n\"2.4.0\"\r\n\"2.3.0\"\r\n\"2.2.0\"\r\n\"2.1.0\"\r\n\"2.0.0\"\r\n\"1.28.0 Release\"\r\n\"2.0.0-rc3\"\r\n\"2.0.0-rc1\"\r\n\"2.0.0-preview9\"\r\n\"1.27.0 Release\"\r\n\"1.26.0 Release\"\r\n\"1.25.0 Release\"\r\n\"2.0.0-preview8\"\r\n\"2.0.0-preview7\"\r\n\"1.24.0 Release\"\r\n\"2.0.0-preview6\"\r\n\"1.23.0 Release\"\r\n\"2.0.0-preview5\"So I should sort it in reverse order, treating the text as numbers, then take the first element, and also remove the quotation marks.curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name' | sort -Vr | head -n 1 | tr -d \"\\\"\"The -Vr switches to sort tells it to treat them as numbers and do a reverse sort. The head -n 1 takes the first element. And tr -d removes the double quotes.\nNext up, how do I get the latest version of my Docker image? 
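(As an aside, the -V flag matters here: a plain lexicographic sort would rank 2.9.0 above 2.14.1. Easy to demo with a couple of made-up version strings, assuming GNU sort:

```shell
# Plain reverse sort compares character by character, so '9' beats '1':
printf '2.9.0\n2.14.1\n2.14.0\n' | sort -r | head -n 1    # prints 2.9.0
# Version sort compares numeric components, giving the real latest:
printf '2.9.0\n2.14.1\n2.14.0\n' | sort -Vr | head -n 1   # prints 2.14.1
```

Without -V the pipeline would happily report an older release as the latest.)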
I was hoping I could query GitHub to get the latest version of the package somehow, but that doesn’t work. For one, GitHub doesn’t seem to have a way of getting all packages of a repo – all it can do is get all packages for an organization, or all packages for a user, and some variants of these – but all of these require authentication. Not a problem in itself, I generated a token to test things out, and used the API to get all versions of the package owned by me.curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions(The token in the examples is not valid).\nThe result is again JSON that looks like this:{\r\n \"id\": 180843308,\r\n \"name\": \"sha256:dcf2eba746ab8b96a4576be0627ab05a0d5d9c436807322c292d0c3bb258c889\",\r\n \"url\": \"https://api.github.com/users/rakheshster/packages/container/powershell-msgraph/versions/180843308\",\r\n \"package_html_url\": \"https://github.com/users/rakheshster/packages/container/package/powershell-msgraph\",\r\n \"created_at\": \"2024-02-19T18:44:20Z\",\r\n \"updated_at\": \"2024-02-19T18:44:20Z\",\r\n \"description\": \"'PowerShell + MS Graph container'\",\r\n \"html_url\": \"https://github.com/users/rakheshster/packages/container/powershell-msgraph/180843308\",\r\n \"metadata\": {\r\n \"package_type\": \"container\",\r\n \"container\": {\r\n \"tags\": [\r\n \"2.13.1\"\r\n ]\r\n }\r\n }\r\n}I can use the following to just extract version numbers. 
That’s the tags basically.curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions | jq '.[] | .metadata.container.tags | .[]'Output looks like this:\"2.14.1\"\r\n\"2.14.0\"\r\n\"2.13.0\"\r\n\"2.12.0\"\r\n\"2.13.1\"Of course I’d have to reverse sort just to be sure, take only the first element, and remove the double quotes.curl -s -H \"Authorization: Bearer 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\" -H \"Accept: application/vnd.github+json\" https://api.github.com/user/packages/container/powershell-msgraph/versions | jq '.[] | .metadata.container.tags | .[]' | sort -Vr | head -n 1 | tr -d \"\\\"\"The jq snippet here is a bit more involved. Read the array (.[]), extract the property with the tags (.metadata.container.tags) – which is itself an array, so read that (.[]).\nWhile I could have gone with this, I didn’t. Instead I decided to query DockerHub as I could just do it without a personal token etc.curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jqAn example entry is as follows. 
All these are part of an array called results.{\r\n \"creator\": 5275378,\r\n \"id\": 569595532,\r\n \"images\": [\r\n {\r\n \"architecture\": \"amd64\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:fa9486e10c6bca45dbd97bd3fab0d572605ce515474288568aeafeabf0f7c9a7\",\r\n \"os\": \"linux\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 240918555,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.295181Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.105558Z\"\r\n },\r\n {\r\n \"architecture\": \"arm64\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:8e18f05bfd87db92364be195e667fcd4b6f4009f6d8e3f247ce1ba34bedff22a\",\r\n \"os\": \"linux\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 240918555,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.308301Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.301845Z\"\r\n },\r\n {\r\n \"architecture\": \"unknown\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:13736cedff20fc9a58d0765f85b5835dd776676eef651f540cba845bca937862\",\r\n \"os\": \"unknown\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 10970,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.286013Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.48484Z\"\r\n },\r\n {\r\n \"architecture\": \"unknown\",\r\n \"features\": \"\",\r\n \"variant\": null,\r\n \"digest\": \"sha256:48e99f7c4c7fdd783a9ad269516cb734149cacd854202d6345be7d46d664e8de\",\r\n \"os\": \"unknown\",\r\n \"os_features\": \"\",\r\n \"os_version\": null,\r\n \"size\": 10970,\r\n \"status\": \"inactive\",\r\n \"last_pulled\": \"2024-01-15T19:36:18.295297Z\",\r\n \"last_pushed\": \"2023-12-16T23:07:04.64608Z\"\r\n }\r\n ],\r\n \"last_updated\": \"2023-12-16T23:07:05.16389Z\",\r\n \"last_updater\": 5275378,\r\n \"last_updater_username\": \"rakheshster\",\r\n \"name\": \"2.0.0\",\r\n \"repository\": 
22731596,\r\n \"full_size\": 240918555,\r\n \"v2\": true,\r\n \"tag_status\": \"inactive\",\r\n \"tag_last_pulled\": \"2024-01-15T19:36:18.308301Z\",\r\n \"tag_last_pushed\": \"2023-12-16T23:07:05.16389Z\",\r\n \"media_type\": \"application/vnd.oci.image.index.v1+json\",\r\n \"content_type\": \"image\",\r\n \"digest\": \"sha256:1affd92d1fa337a7df78ce40837b28d5d4261e6f6ef3e52219aa9e1d9346a18b\"\r\n}To extract the name (which has the version) I can do:curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq '.results[] | .name'And the same drill as above to get just the latest version:curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq -r '.results[] | .name' | sort -Vr | head -n 1Boom! So now I have the latest version in DockerHub. And the latest version of the module from Microsoft (via GitHub).\nSince I have a workflow that already builds the container, I wanted to keep things consistent with that as much as possible. So I added a new job to the existing one. 
My existing one looks like this (this is just a snippet):jobs:\r\n # Just one job here ...\r\n build-linux-box:\r\n runs-on: ubuntu-latest\r\n # the steps of my job\r\n env:\r\n # Coz of https://github.com/orgs/community/discussions/45969 & https://github.com/docker/build-push-action/issues/755\r\n BUILDX_NO_DEFAULT_ATTESTATIONS: 1\r\n steps:\r\n # Checkout the code from GitHub\r\n - name: Checkout Code\r\n uses: actions/checkout@v4\r\n\r\n # Setup QEMU for building other platforms\r\n # https://github.com/docker/setup-qemu-action\r\n - name: Set up QEMU\r\n uses: docker/setup-qemu-action@v3I added a new one above that checks if there’s a version difference.jobs:\r\n # This one checks if any updates are needed\r\n update-check-job:\r\n runs-on: ubuntu-latest\r\n # Outputs of this job\r\n outputs:\r\n # This comes from the step below\r\n SHOULD_RUN: ${{ steps.EXIT_ACTION.outputs.SHOULD_RUN }}\r\n SDK_VERSION: ${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\r\n # The steps of this job\r\n steps:\r\n - name: Get versions\r\n id: GET_VERSIONS\r\n run: |\r\n echo SDK_VERSION=\"$(curl -s https://api.github.com/repos/microsoftgraph/msgraph-sdk-powershell/releases | jq '.[] | .name' | sort -Vr | head -n 1 | tr -d \"\\\"\")\" >> $GITHUB_OUTPUT\r\n echo DOCKERHUB_VERSION=\"$(curl -s 'https://hub.docker.com/v2/repositories/rakheshster/powershell-msgraph/tags' -H 'Content-Type: application/json' | jq -r '.results[] | .name' | sort -Vr | head -n 1)\" >> $GITHUB_OUTPUT\r\n \r\n - name: Exit if no update available\r\n id: EXIT_ACTION\r\n run: |\r\n if [[ \"${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\" == \"${{ steps.GET_VERSIONS.outputs.DOCKERHUB_VERSION }}\" ]]; then\r\n echo \"No update needed\"\r\n echo SHOULD_RUN=false >> $GITHUB_OUTPUT\r\n else\r\n echo \"Needs updating to ${{ steps.GET_VERSIONS.outputs.SDK_VERSION }}\"\r\n echo SHOULD_RUN=true >> $GITHUB_OUTPUT\r\n fi\r\n\r\n # This one actually builds.\r\n # This is a copy paste of the docker-build-and-push.yaml 
action\r\n # But I change 'github.event.inputs.moduleversion' to 'needs.update-check-job.outputs.SDK_VERSION'\r\n build-linux-box:\r\n runs-on: ubuntu-latest\r\n needs: update-check-job\r\n if: needs.update-check-job.outputs.SHOULD_RUN == 'true'\r\n env:\r\n # Coz of https://github.com/orgs/community/discussions/45969 & https://github.com/docker/build-push-action/issues/755\r\n BUILDX_NO_DEFAULT_ATTESTATIONS: 1\r\n steps:\r\n # Checkout the code from GitHub\r\n - name: Checkout Code\r\n uses: actions/checkout@v4\r\n\r\n # Setup QEMU for building other platforms\r\n # https://github.com/docker/setup-qemu-action\r\n - name: Set up QEMU\r\n uses: docker/setup-qemu-action@v3What this new job does is check whether there’s a difference in versions. If yes, it sets an output variable SHOULD_RUN to true. The second job now only runs if that variable is set to true (needs.update-check-job.outputs.SHOULD_RUN == 'true').\nSince I know the version of the module to target, I output that as part of the first job, and the second job uses that when building the container.- name: Build and Push\r\n uses: docker/build-push-action@v5\r\n with:\r\n file: Dockerfile\r\n platforms: \"linux/amd64,linux/arm64\"\r\n # Only amd64 is supported for now in the Ubuntu image https://learn.microsoft.com/en-gb/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.4#ubuntu\r\n # thanks for the syntax https://github.com/docker/build-push-action/issues/557\r\n build-args: |\r\n GRAPH_VERSION=${{ needs.update-check-job.outputs.SDK_VERSION }}\r\n push: ${{ github.ref == 'refs/heads/main' }}\r\n outputs: type=image,name=target,annotation-index.org.opencontainers.image.description='PowerShell + MS Graph container'\r\n tags: |\r\n rakheshster/powershell-msgraph:${{ needs.update-check-job.outputs.SDK_VERSION }}\r\n ghcr.io/rakheshster/powershell-msgraph:${{ needs.update-check-job.outputs.SDK_VERSION }}I am pretty pleased with it overall! 
To give credit, it was this StackOverflow post that gave me the idea of using multiple jobs. That felt neater than the other solutions.\nAnd that’s it! Now GitHub Actions will automatically publish new versions of this image whenever there’s a new Graph module. As luck would have it, today version 2.14.1 was released and the image was automatically built. Nice!", "date_published": "2024-02-20T18:26:46+00:00", "date_modified": "2024-02-20T18:32:57+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "curl", "Docker", "microsoft graph", "Linux & BSD" ] }, { "id": "https://rakhesh.com/?p=7491", "url": "https://rakhesh.com/linux-bsd/mapping-optionleft-arrow-to-go-back-a-word/", "title": "Mapping Option+left arrow to go back a word", "content_html": "On macOS, using bash. I want to map the Option+left arrow
keys to go back a word, and Option+right arrow keys to go forward a word.
Am sure this is common knowledge, but I wasn’t sure what to do. Thanks to this StackOverflow post, I figured it out.
\nIn the terminal, run cat
and then press Option+left arrow
. This appears as ^[^[[D
. The ^[
bit is your escape key basically. In bash you’d represent it as \\e
. So ^[^[[D
translates to \\e\\e[D
in bash.
Ditto for Option+right arrow, which translates to \\e\\e[C in bash.
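Those sequences can be confirmed byte by byte with printf and od – ESC is 0x1b, so Option+left arrow really is ESC ESC [ D on the wire:

```shell
# printf turns \033 into the ESC byte; od dumps the raw hex values,
# and tr squashes them into one string for easy reading.
printf '\033\033[D' | od -An -tx1 | tr -d ' \n'   # 1b 1b 5b 44 = ESC ESC [ D
```

(0x5b is `[` and 0x44 is `D`, matching the ^[^[[D seen under cat.)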
Armed with these two pieces of info, you can use the builtin bind command to map them to back and forward word movements.
bind '\"\\e\\e[D\": backward-word'\r\nbind '\"\\e\\e[C\": forward-word'
Add these to .bash_profile (or .bashrc) and you are in business.
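Alternatively, the same mappings can live in ~/.inputrc – readline’s own config file – so they apply to any readline-based program, not just bash (a sketch; this is readline syntax, no bind needed):

```
"\e\e[D": backward-word
"\e\e[C": forward-word
```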
Extra info: By default the Esc+b and Esc+f keys are bound to these two. That can be seen via the bind -q command.
$ bind -q backward-word\r\nbackward-word can be invoked via \"\\eb\".\r\n\r\n$ bind -q forward-word\r\nforward-word can be invoked via \"\\ef\".
To make this useful in iTerm2 on macOS, one can map the Option key to the Esc+ key. This does not map the Option key to the Esc key, but to the Esc+ action – which basically means Esc plus whatever key you press. Treats it like a modifier, basically.
I wasted an inordinate amount of time chasing this issue. Hopefully it saves others.
\nI wanted to create Apple Enrollment profiles in Intune using Graph PowerShell (or even just Graph API). Creating is easy, just use the beta cmdlets like this:
New-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfile -Platform 'iOS' -DefaultEnrollmentType 'device' -DisplayName \"<insert name>\" -Description \"<insert description>\"
You do need to give a description, even though it’s optional in the portal. Else the cmdlet throws an error.
\nAssigning it to someone is a different story though – doesn’t work! Through a lot of trial and error I figured out the correct cmdlets to do this:
$target = @{\r\n '@odata.type' = \"#microsoft.graph.groupAssignmentTarget\"\r\n 'deviceAndAppManagementAssignmentFilterId' = $null\r\n 'deviceAndAppManagementAssignmentFilterType' = 'none'\r\n 'groupId' = '<put entra group Id>'\r\n}\r\n\r\nNew-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment -AppleUserInitiatedEnrollmentProfileId '<put profile Id>' -Target $target
The documentation is of no help here. But that in itself would have been fine, except that even this does not work. You get errors like this:
New-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment_CreateExpanded: {\r\n \"_version\": 3,\r\n \"Message\": \"An internal server error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: 6be199a3-0fb5-4a3a-a0be-462341e3e050 - Url: https://fef.msua02.manage.microsoft.com/DeviceEnrollmentFE/StatelessDeviceEnrollmentFEService/deviceManagement/appleUserInitiatedEnrollmentProfiles('04719205-e852-461a-bb68-46c668cb7c28')/assignments?api-version=5023-06-28\",\r\n \"CustomApiErrorPhrase\": \"\",\r\n \"RetryAfter\": null,\r\n \"ErrorSourceService\": \"\",\r\n \"HttpHeaders\": \"{}\"\r\n}\r\n\r\nStatus: 500 (InternalServerError)\r\nErrorCode: InternalServerError\r\nDate: 2024-01-29T14:32:26\r\n\r\nHeaders:\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : a8faebaa-91f0-43dc-a11d-7f2616fba1bf\r\nclient-request-id : 6bf199a3-0fb5-4a3a-a0be-462341e3e050\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"US East\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"001\",\"RoleInstance\":\"YT1PEPF00001D90\"}}\r\nDate : Mon, 29 Jan 2024 14:32:26 GM
I tried other variants like:
$body = @{\r\n 'target' = @{\r\n '@odata.type' = \"#microsoft.graph.groupAssignmentTarget\"\r\n 'deviceAndAppManagementAssignmentFilterId' = $null\r\n 'deviceAndAppManagementAssignmentFilterType' = 'none'\r\n 'groupId' = 'put entra group Id'\r\n }\r\n}\r\n\r\nNew-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment -AppleUserInitiatedEnrollmentProfileId '<put profile Id>' -BodyParameter $body
But no use. Ditto if I try Invoke-MgGraphRequest or Invoke-RestMethod directly. They all fail!
Ok, and what about if I want to delete one of these via PowerShell? Same error:
Remove-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfile_Delete: {\r\n \"_version\": 3,\r\n \"Message\": \"An internal server error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: 533345e0-cae9-4711-85e8-cb55d7a16e41 - Url: https://fef.msua02.manage.microsoft.com/DeviceEnrollmentFE/StatelessDeviceEnrollmentFEService/deviceManagement/appleUserInitiatedEnrollmentProfiles('636b9e2b-f762-4427-bfec-0fd76323750a')?api-version=5023-06-28\",\r\n \"CustomApiErrorPhrase\": \"\",\r\n \"RetryAfter\": null,\r\n \"ErrorSourceService\": \"\",\r\n \"HttpHeaders\": \"{}\"\r\n}\r\n\r\nStatus: 500 (InternalServerError)\r\nErrorCode: InternalServerError\r\nDate: 2024-01-29T14:22:19\r\n\r\nHeaders:\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : cc10c124-2666-4db1-b58c-aa327e32a382\r\nclient-request-id : 133341e0-cae9-4711-85e8-cb55d7a16e42\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"US East\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"000\",\"RoleInstance\":\"TO1PEPF000051D9\"}}\r\nDate : Mon, 29 Jan 2024 14:22:18 GM
Madness!
\nCrazy thing is both operations work fine via the portal. I use Firefox, so if I right click the page, go to Inspect, and then the Network tab I can see the operations working.
\nHere’s delete, for instance.
\n\nAnd here’s a group assignment:
\n\nAnd here’s the request body that Firefox sends:
\n\nEverything matches what I am doing. Heck, I even copy pasted the request as is from Firefox and tried but it doesn’t work.
\nWorse, if I hit Resend:
\n\nThat too works!
\nOut of frustration I tried copying the headers in the request Firefox makes and adding them to my Invoke-RestMethod
requests, but nothing helped. What finally helped though, was copying the bearer token from Firefox and using that in Graph. That is to say, copy the entirety of the highlighted text:
Paste it into PowerShell thus and connect:
$accessToken = '<paste>' | ConvertTo-SecureString -AsPlainText\r\nConnect-MgGraph -AccessToken $accessToken
Now all the cmdlets above that didn’t work run successfully! Magic.
\nI don’t know why this works but the way I was trying previously didn’t. I was using an App Registration with pretty much the same permissions as what I see in this access token (difference being the App Registration had application permissions while the token had delegated permissions) so I am not sure what’s different (except the access token being for the Intune portal and maybe that matters). But at least this way I can use PowerShell to manipulate things, rather than use the portal. It won’t work for any scripts, but is useful to create a bunch of profiles for instance or do assignments.
\n", "content_text": "I wasted an inordinate amount of time chasing this issue. Hopefully it saves others.\nI wanted to create Apple Enrollment profiles in Intune using Graph PowerShell (or even just Graph API). Creating is easy, just use the beta cmdlets like this:New-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfile -Platform 'iOS' -DefaultEnrollmentType 'device' -DisplayName \"<insert name>\" -Description \"<insert description>\"You do need to give a description, even though it’s optional in the portal. Else the cmdlet throws an error.\nAssigning it to someone is a different story though – doesn’t work! Through a lot of trial and error I figured out the correct cmdlets to do this:$target = @{\r\n '@odata.type' = \"#microsoft.graph.groupAssignmentTarget\"\r\n 'deviceAndAppManagementAssignmentFilterId' = $null\r\n 'deviceAndAppManagementAssignmentFilterType' = 'none'\r\n 'groupId' = '<put entra group Id>'\r\n}\r\n\r\nNew-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment -AppleUserInitiatedEnrollmentProfileId '<put profile Id>' -Target $targetThe documentation is useless and not helpful. But that in itself would have been fine, except that even this does not work. 
You get errors like this:New-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment_CreateExpanded: {\r\n \"_version\": 3,\r\n \"Message\": \"An internal server error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: 6be199a3-0fb5-4a3a-a0be-462341e3e050 - Url: https://fef.msua02.manage.microsoft.com/DeviceEnrollmentFE/StatelessDeviceEnrollmentFEService/deviceManagement/appleUserInitiatedEnrollmentProfiles('04719205-e852-461a-bb68-46c668cb7c28')/assignments?api-version=5023-06-28\",\r\n \"CustomApiErrorPhrase\": \"\",\r\n \"RetryAfter\": null,\r\n \"ErrorSourceService\": \"\",\r\n \"HttpHeaders\": \"{}\"\r\n}\r\n\r\nStatus: 500 (InternalServerError)\r\nErrorCode: InternalServerError\r\nDate: 2024-01-29T14:32:26\r\n\r\nHeaders:\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : a8faebaa-91f0-43dc-a11d-7f2616fba1bf\r\nclient-request-id : 6bf199a3-0fb5-4a3a-a0be-462341e3e050\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"US East\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"001\",\"RoleInstance\":\"YT1PEPF00001D90\"}}\r\nDate : Mon, 29 Jan 2024 14:32:26 GMI tried other variants like:$body = @{\r\n 'target' = @{\r\n '@odata.type' = \"#microsoft.graph.groupAssignmentTarget\"\r\n 'deviceAndAppManagementAssignmentFilterId' = $null\r\n 'deviceAndAppManagementAssignmentFilterType' = 'none'\r\n 'groupId' = 'put entra group Id'\r\n }\r\n}\r\n\r\nNew-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfileAssignment -AppleUserInitiatedEnrollmentProfileId '<put profile Id>' -BodyParameter $bodyBut no use. Ditto if I try Invoke-MgGraphRequest or Invoke-RestMethod directly. They all fail!\nOk, and what about if I want to delete one of these via PowerShell? 
Same error:Remove-MgBetaDeviceManagementAppleUserInitiatedEnrollmentProfile_Delete: {\r\n \"_version\": 3,\r\n \"Message\": \"An internal server error has occurred - Operation ID (for customer support): 00000000-0000-0000-0000-000000000000 - Activity ID: 533345e0-cae9-4711-85e8-cb55d7a16e41 - Url: https://fef.msua02.manage.microsoft.com/DeviceEnrollmentFE/StatelessDeviceEnrollmentFEService/deviceManagement/appleUserInitiatedEnrollmentProfiles('636b9e2b-f762-4427-bfec-0fd76323750a')?api-version=5023-06-28\",\r\n \"CustomApiErrorPhrase\": \"\",\r\n \"RetryAfter\": null,\r\n \"ErrorSourceService\": \"\",\r\n \"HttpHeaders\": \"{}\"\r\n}\r\n\r\nStatus: 500 (InternalServerError)\r\nErrorCode: InternalServerError\r\nDate: 2024-01-29T14:22:19\r\n\r\nHeaders:\r\nTransfer-Encoding : chunked\r\nVary : Accept-Encoding\r\nStrict-Transport-Security : max-age=31536000\r\nrequest-id : cc10c124-2666-4db1-b58c-aa327e32a382\r\nclient-request-id : 133341e0-cae9-4711-85e8-cb55d7a16e42\r\nx-ms-ags-diagnostic : {\"ServerInfo\":{\"DataCenter\":\"US East\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"000\",\"RoleInstance\":\"TO1PEPF000051D9\"}}\r\nDate : Mon, 29 Jan 2024 14:22:18 GMMadness!\nCrazy thing is both operations work fine via the portal. I use Firefox, so if I right click the page, go to Inspect, and then the Network tab I can see the operations working.\nHere’s delete, for instance.\n\nAnd here’s a group assignment:\n\nAnd here’s the request body that Firefox sends:\n\nEverything matches what I am doing. Heck, I even copy pasted the request as is from Firefox and tried but it doesn’t work.\nWorse, if I hit Resend:\n\nThat too works!\nOut of frustration I tried copying the headers in the request Firefox makes and adding them to my Invoke-RestMethod requests, but nothing helped. What finally helped though, was copying the bearer token from Firefox and using that in Graph. 
That is to say, copy the entirety of the highlighted text:\n\nPaste it into PowerShell thus and connect:$accessToken = '<paste>' | ConvertTo-SecureString -AsPlainText\r\nConnect-MgGraph -AccessToken $accessTokenNow all the cmdlets above that didn’t work run successfully! Magic.\nI don’t know why this works but the way I was trying previously didn’t. I was using an App Registration with pretty much the same permissions as what I see in this access token (difference being the App Registration had application permissions while the token had delegated permissions) so I am not sure what’s different (except the access token being for the Intune portal and maybe that matters). But at least this way I can use PowerShell to manipulate things, rather than use the portal. It won’t work for any scripts, but is useful to create a bunch of profiles for instance or do assignments.", "date_published": "2024-01-29T14:58:38+00:00", "date_modified": "2024-01-29T14:58:38+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "intune", "microsoft graph", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7473", "url": "https://rakhesh.com/azure/adding-removing-ous-to-azure-ad-connect-via-powershell/", "title": "Adding Removing OUs to Azure AD Connect via PowerShell", "content_html": "I wanted to update the OUs synced from some of my connectors in Azure AD. It’s easy to do it via the GUI but there was nothing online on how to do it via PowerShell.
\nI asked ChatGPT and it hallucinated like hell, making up new cmdlets. :)
\n\nAmazing, the confidence with which it can make up non-existent cmdlets.
\nAnyways, inspired by this blog post I went around exploring every cmdlet in the module to see which one might do the trick. And finally stumbled upon this:
# Pause syncing if it is running, else everything that follows gives the impression it works but doesn't actually do anything\r\nif ((Get-ADSyncScheduler).SyncCycleInProgress) {\r\n\tStop-ADSyncSyncCycle\r\n}\r\n\r\n# Get the existing connectors\r\n$connectors = Get-ADSyncConnector\r\n\r\n# To add an OU in the inclusion list of one of the connectors\r\n$connectors[4].Partitions.ConnectorPartitionScope.ContainerExclusionList.Add(\"OU=XXX,DC=XXX,DC=com\")\r\nAdd-ADSyncConnector -Connector $connectors[4]
Then do a full sync so the OU is imported.
Set-ADSyncSchedulerConnectorOverride -Connector $connectors[4].Identifier -FullSyncRequired $true\r\nStart-ADSyncSyncCycle -PolicyType Delta
And to remove:
$connectors[4].Partitions.ConnectorPartitionScope.ContainerExclusionList.Remove(\"OU=XXX,DC=XXX,DC=com\")\r\nAdd-ADSyncConnector -Connector $connectors[4]
Didn’t realize the Add-ADSyncConnector
can both add and update. While Googling more on that cmdlet I also came across this blog post – linking it here in case it’s of use to me later.
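For convenience, the whole flow above could be wrapped into a function. This is only a sketch on my part — the function name, the parameters, and looking up the connector by name (rather than by index) are all assumptions, not something from the ADSync docs:

```powershell
# Hypothetical wrapper for the steps above; Add-SyncedOU, the parameter
# names, and the connector lookup by name are all my own assumptions
function Add-SyncedOU {
    param(
        [Parameter(Mandatory)][string]$ConnectorName,
        [Parameter(Mandatory)][string]$OuDN
    )

    # Pause syncing if it is running, else the changes below silently don't apply
    if ((Get-ADSyncScheduler).SyncCycleInProgress) {
        Stop-ADSyncSyncCycle
    }

    $connector = Get-ADSyncConnector -Name $ConnectorName

    # Removing the OU from the exclusion list brings it into sync scope
    $connector.Partitions.ConnectorPartitionScope.ContainerExclusionList.Remove($OuDN)
    Add-ADSyncConnector -Connector $connector

    # Flag a full sync for this connector and kick off a cycle
    Set-ADSyncSchedulerConnectorOverride -Connector $connector.Identifier -FullSyncRequired $true
    Start-ADSyncSyncCycle -PolicyType Delta
}
```

Untested as a unit, but each line mirrors the snippets above.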
This is a continuation of an older post and an example.
\nThis post and this one from Microsoft are useful references too.
\nI want to delegate the ability to do admin consents to certain Graph permissions to some of my admins. In this case the “Sites.Selected” Graph API permission which typically needs a Global Admin to do the consent. To do this I have to create a custom app consent policy and a custom role that includes this app consent policy.
\nFirst, connect to Graph with the following scopes.
Connect-MgGraph -Scopes \"Policy.Read.PermissionGrant\",\"Policy.ReadWrite.PermissionGrant\",\"RoleManagement.ReadWrite.Directory\"
Get the Microsoft Graph service principal.
$servicePrincipal = Get-MgServicePrincipal -All | Where-Object { $_.DisplayName -eq \"Microsoft Graph\" }
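As an aside (this is my sketch, not from the original steps): Microsoft Graph has the same well-known AppId in every tenant, 00000003-0000-0000-c000-000000000000 — you can see it in the example output — so a server-side filter on AppId is quicker and avoids matching the many other service principals with “Graph” in their name:

```powershell
# Microsoft Graph's AppId is the same well-known value in every tenant,
# so filtering on it is unambiguous compared to matching DisplayName
$servicePrincipal = Get-MgServicePrincipal -Filter "appId eq '00000003-0000-0000-c000-000000000000'"
```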
For reference, here’s an example of the output:
> Get-MgServicePrincipal -All | Where-Object { $_.DisplayName -match \"Graph\" }\r\n\r\nDisplayName Id AppId SignInAudience ServicePrincipalType\r\n----------- -- ----- -------------- --------------------\r\nIDML Graph Resolver Service and CAD 2362f192-9721-4089-b2c9-6acf3e9ce553 d88a361a-d488-4271-a13f-a83df7dd99c2 AzureADMultipleOrgs Application\r\nGraph Data Connect App Registration 2a911a15-76e9-4e98-a1ee-f9f45bd6eba2 52e6d66a-9b02-477b-ad84-01c7d088f081 AzureADMyOrg Application\r\nExchange Office Graph Client for AAD - Noninteractive 2f4d6758-e00b-4037-a933-8b5224f00489 765fe668-04e7-42ba-aec0-2c96f1d8b652 AzureADMultipleOrgs Application\r\nMicrosoft Graph 327ba63b-334e-4004-bb30-20a607de4098 00000003-0000-0000-c000-000000000000 AzureADMultipleOrgs Application\r\nAzure Graph 40a924a5-b3b2-45d3-b466-cc5b2cbf9884 dbcbd02a-d7c4-42fb-8c27-b07e5118b848 AzureADMultipleOrgs Application\r\nExchange Office Graph Client for AAD - Interactive 57943d81-ce4c-4a80-ae3f-56ce03c6a8fd 6da466b6-1d13-4a2c-97bd-51a99e8d4d74 AzureADMultipleOrgs Application\r\nAudit GraphAPI Application 68420e79-2754-4dbd-9819-7049a2820601 4bfd5d66-9285-44a1-bb14-14953e8cdf5e AzureADMultipleOrgs Application\r\nMicrosoft Graph PowerShell 70a6c76d-5c6b-41ca-bb81-56d6f0360ec0 14d82eec-204b-4c2f-b7e8-296a70dab67e AzureADandPersonal\u2026 Application\r\nMicrosoft Graph Change Tracking 895b3d4b-d0b6-4102-8e01-d19ad243a7df 0bf30f3b-4a52-48df-9a82-234910c4a086 AzureADMultipleOrgs Application\r\nMicrosoft Teams Graph Service 9479fa7c-c94a-4c73-8599-92b762ac0029 ab3be6b7-f5df-413d-ac2d-abf1e3fd9c0b AzureADMultipleOrgs Application\r\nOfficeGraph 9b00ad24-77e1-4d53-be39-19e1bf08aa0d ba23cd2a-306c-48f2-9d62-d3ecd372dfe4 AzureADMultipleOrgs Application\r\nAzure Resource Graph adaf502e-f36c-4163-af9d-46ecf592d482 509e4652-da8d-478d-a730-e9d4a1996ca4 AzureADMultipleOrgs Application\r\nGraph Connector Service bea6ee98-3321-47f5-aff4-4f4844c68c35 56c1da01-2129-48f7-9355-af6d59d42766 AzureADMultipleOrgs 
Application\r\nMicrosoft Graph Connectors Core ec34f212-7d53-4c21-b153-c623b96877e6 f8f7a2aa-e116-4ba6-8aea-ca162cfa310d AzureADMultipleOrgs Application
I selected the Microsoft Graph one from above.
\nGet the permission Id of the “Sites.Selected” permission within this.
$permissionId = ((Get-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id).AppRoles | Where-Object { $_.Value -eq \"Sites.Selected\" }).Id
Here’s an example of what the permission looks like:
> (Get-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id).AppRoles | Where-Object { $_.Value -eq \"Sites.Selected\" }\r\n\r\nAllowedMemberTypes : {Application}\r\nDescription : Allow the application to access a subset of site collections without a signed in user.The specific site collections and the permissions granted will be configured in SharePoint\r\n Online.\r\nDisplayName : Access selected site collections\r\nId : 883ea226-0bf2-4a8f-9f9d-92c9162a727d\r\nIsEnabled : True\r\nOrigin : Application\r\nValue : Sites.Selected\r\nAdditionalProperties : {}
Now create a new app consent policy and add the “Sites.Selected” permission within it.
New-MgPolicyPermissionGrantPolicy `\r\n -Id \"mytenant-graph-sites-selected\" `\r\n -Description \"Permissions consentable by Application administrator (Sites)\" `\r\n -DisplayName \"Only sites.selected permission\"\r\n\r\nNew-MgPolicyPermissionGrantPolicyInclude `\r\n -PermissionGrantPolicyId \"mytenant-graph-sites-selected\" `\r\n -PermissionType \"application\" `\r\n -ResourceApplication $servicePrincipal.AppId `\r\n -Permissions $permissionId
Here’s what the policy looks like:
> Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId 'mytenant-graph-sites-selected' | fl *\r\n\r\nDeletedDateTime :\r\nDescription : Permissions consentable by Application administrator (Sites)\r\nDisplayName : Only sites.selected permission\r\nExcludes : {}\r\nId : mytenant-graph-sites-selected\r\nIncludes : {f934e25e-6b38-4045-b690-8e2ba42c2915}\r\nAdditionalProperties : {[@odata.context, https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies/$entity], [includes@odata.context,\r\n https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies('mytenant-graph-sites-selected')/includes], [excludes@odata.context,\r\n https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies('mytenant-graph-sites-selected')/excludes]}
Now to create a custom role that includes this app consent policy.
$params = @{\r\n\tdescription = \"Can manage selected SharePoint app registrations permissions\"\r\n\tdisplayName = \"Application administrator (SharePoint)\"\r\n\trolePermissions = @(\r\n\t\t@{\r\n\t\t\tallowedResourceActions = @(\r\n \"microsoft.directory/applications/basic/read\",\r\n \"microsoft.directory/applications/createAsOwner\",\r\n \"microsoft.directory/servicePrincipals/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/update\",\r\n \"microsoft.directory/servicePrincipals/create\",\r\n \"microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.mytenant-graph-sites-selected\",\r\n \"microsoft.directory/servicePrincipals/managePermissionGrantsForAll.mytenant-graph-sites-selected\"\r\n\t\t\t)\r\n\t\t}\r\n\t)\r\n\tisEnabled = $true\r\n}\r\n\r\nNew-MgRoleManagementDirectoryRoleDefinition -BodyParameter $params
I had to make some tweaks here compared to when I first did this in 2021. Specifically, I added these two:
\"microsoft.directory/applications.myOrganization/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/update\",
Just the update one would have been enough, I think. I came across this list from Microsoft’s app registrations permissions page. This other page with permissions for app consent is where I came across the last two permissions.
\"microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.mytenant-graph-sites-selected\",\r\n \"microsoft.directory/servicePrincipals/managePermissionGrantsForAll.mytenant-graph-sites-selected\"
This is what allows the Delegated and Application permissions to be consented, for “Sites.Selected”.
\nNow grant this to an admin from the portal.
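The portal isn’t the only way to do the assignment. As a sketch (the display-name filter and the AdeleV UPN are placeholders/assumptions of mine), the same thing should be doable with Graph PowerShell, since we already connected with the RoleManagement.ReadWrite.Directory scope:

```powershell
# Assumption: the role was created with the display name used earlier;
# AdeleV's UPN is a placeholder for whichever admin you're delegating to
$roleDefinition = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayName eq 'Application administrator (SharePoint)'"
$admin = Get-MgUser -UserId "AdeleV@mytenant.onmicrosoft.com"

# Assign the custom role tenant-wide ("/" scope)
New-MgRoleManagementDirectoryRoleAssignment `
    -PrincipalId $admin.Id `
    -RoleDefinitionId $roleDefinition.Id `
    -DirectoryScopeId "/"
```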
\n\nNow Adele can log in to the portal, create an app registration, add the “Sites.Selected” permission, remove the “User.Read” permission (coz that is not something we allowed in the list above), and do an admin consent (I had to refresh the page after adding the permission, for the “Grant admin consent” button to show).
\n\n\n", "content_text": "This is a continuation of an older post and an example.\nThis and this posts from Microsoft are useful references too.\nI want to delegate the ability to do admin consents to certain Graph permissions to some of my admins. In this case the “Sites.Selected” Graph API permission which typically needs a Global Admin to do the consent. To do this I have to create a custom app consent policy and a custom role that includes this app consent policy.\nFirst, connect to Graph with the following scopes.Connect-MgGraph -Scopes \"Policy.Read.PermissionGrant\",\"Policy.ReadWrite.PermissionGrant\",\"RoleManagement.ReadWrite.Directory\"Get the Microsoft Graph service principal.$servicePrincipal = Get-MgServicePrincipal -All | Where-Object { $_.DisplayName -eq \"Microsoft Graph\" }For reference, here’s an example of the output:> Get-MgServicePrincipal -All | Where-Object { $_.DisplayName -match \"Graph\" }\r\n\r\nDisplayName Id AppId SignInAudience ServicePrincipalType\r\n----------- -- ----- -------------- --------------------\r\nIDML Graph Resolver Service and CAD 2362f192-9721-4089-b2c9-6acf3e9ce553 d88a361a-d488-4271-a13f-a83df7dd99c2 AzureADMultipleOrgs Application\r\nGraph Data Connect App Registration 2a911a15-76e9-4e98-a1ee-f9f45bd6eba2 52e6d66a-9b02-477b-ad84-01c7d088f081 AzureADMyOrg Application\r\nExchange Office Graph Client for AAD - Noninteractive 2f4d6758-e00b-4037-a933-8b5224f00489 765fe668-04e7-42ba-aec0-2c96f1d8b652 AzureADMultipleOrgs Application\r\nMicrosoft Graph 327ba63b-334e-4004-bb30-20a607de4098 00000003-0000-0000-c000-000000000000 AzureADMultipleOrgs Application\r\nAzure Graph 40a924a5-b3b2-45d3-b466-cc5b2cbf9884 dbcbd02a-d7c4-42fb-8c27-b07e5118b848 AzureADMultipleOrgs Application\r\nExchange Office Graph Client for AAD - Interactive 57943d81-ce4c-4a80-ae3f-56ce03c6a8fd 6da466b6-1d13-4a2c-97bd-51a99e8d4d74 AzureADMultipleOrgs Application\r\nAudit GraphAPI Application 68420e79-2754-4dbd-9819-7049a2820601 
4bfd5d66-9285-44a1-bb14-14953e8cdf5e AzureADMultipleOrgs Application\r\nMicrosoft Graph PowerShell 70a6c76d-5c6b-41ca-bb81-56d6f0360ec0 14d82eec-204b-4c2f-b7e8-296a70dab67e AzureADandPersonal\u2026 Application\r\nMicrosoft Graph Change Tracking 895b3d4b-d0b6-4102-8e01-d19ad243a7df 0bf30f3b-4a52-48df-9a82-234910c4a086 AzureADMultipleOrgs Application\r\nMicrosoft Teams Graph Service 9479fa7c-c94a-4c73-8599-92b762ac0029 ab3be6b7-f5df-413d-ac2d-abf1e3fd9c0b AzureADMultipleOrgs Application\r\nOfficeGraph 9b00ad24-77e1-4d53-be39-19e1bf08aa0d ba23cd2a-306c-48f2-9d62-d3ecd372dfe4 AzureADMultipleOrgs Application\r\nAzure Resource Graph adaf502e-f36c-4163-af9d-46ecf592d482 509e4652-da8d-478d-a730-e9d4a1996ca4 AzureADMultipleOrgs Application\r\nGraph Connector Service bea6ee98-3321-47f5-aff4-4f4844c68c35 56c1da01-2129-48f7-9355-af6d59d42766 AzureADMultipleOrgs Application\r\nMicrosoft Graph Connectors Core ec34f212-7d53-4c21-b153-c623b96877e6 f8f7a2aa-e116-4ba6-8aea-ca162cfa310d AzureADMultipleOrgs ApplicationI selected the Microsoft Graph one from above.\nGet the permission Id of the “Sites.Selected” permission within this.$permissionId = ((Get-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id).AppRoles | Where-Object { $_.Value -eq \"Sites.Selected\" }).IdHere’s an example of what the permission looks like:> (Get-MgServicePrincipal -ServicePrincipalId $servicePrincipal.Id).AppRoles | Where-Object { $_.Value -eq \"Sites.Selected\" }\r\n\r\nAllowedMemberTypes : {Application}\r\nDescription : Allow the application to access a subset of site collections without a signed in user.The specific site collections and the permissions granted will be configured in SharePoint\r\n Online.\r\nDisplayName : Access selected site collections\r\nId : 883ea226-0bf2-4a8f-9f9d-92c9162a727d\r\nIsEnabled : True\r\nOrigin : Application\r\nValue : Sites.Selected\r\nAdditionalProperties : {}Now create a new app consent policy and add the “Sites.Selected” permission within 
it.New-MgPolicyPermissionGrantPolicy `\r\n -Id \"mytenant-graph-sites-selected\" `\r\n -Description \"Permissions consentable by Application administrator (Sites)\" `\r\n -DisplayName \"Only sites.selected permission\"\r\n\r\nNew-MgPolicyPermissionGrantPolicyInclude `\r\n -PermissionGrantPolicyId \"mytenant-graph-sites-selected\" `\r\n -PermissionType \"application\" `\r\n -ResourceApplication $servicePrincipal.AppId `\r\n -Permissions $permissionIdHere’s what the policy looks like:> Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId 'mytenant-graph-sites-selected' | fl *\r\n\r\nDeletedDateTime :\r\nDescription : Permissions consentable by Application administrator (Sites)\r\nDisplayName : Only sites.selected permission\r\nExcludes : {}\r\nId : mytenant-graph-sites-selected\r\nIncludes : {f934e25e-6b38-4045-b690-8e2ba42c2915}\r\nAdditionalProperties : {[@odata.context, https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies/$entity], [includes@odata.context,\r\n https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies('mytenant-graph-sites-selected')/includes], [excludes@odata.context,\r\n https://graph.microsoft.com/v1.0/$metadata#policies/permissionGrantPolicies('mytenant-graph-sites-selected')/excludes]}Now to create a custom role that includes this app consent policy.$params = @{\r\n\tdescription = \"Can manage selected SharePoint app registrations permissions\"\r\n\tdisplayName = \"Application administrator (SharePoint)\"\r\n\trolePermissions = @(\r\n\t\t@{\r\n\t\t\tallowedResourceActions = @(\r\n \"microsoft.directory/applications/basic/read\",\r\n \"microsoft.directory/applications/createAsOwner\",\r\n \"microsoft.directory/servicePrincipals/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/update\",\r\n \"microsoft.directory/servicePrincipals/create\",\r\n 
\"microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.mytenant-graph-sites-selected\",\r\n \"microsoft.directory/servicePrincipals/managePermissionGrantsForAll.mytenant-graph-sites-selected\"\r\n\t\t\t)\r\n\t\t}\r\n\t)\r\n\tisEnabled = $true\r\n}\r\n\r\nNew-MgRoleManagementDirectoryRoleDefinition -BodyParameter $paramsI had to make some tweaks here compared to when I first did this in 2021. Specifically, I added these two:\"microsoft.directory/applications.myOrganization/allProperties/read\",\r\n \"microsoft.directory/applications.myOrganization/allProperties/update\",Just the update one would have been enough, I think. I came across this list from Microsoft’s app registrations permissions page. This other page with permissions for app consent is where I across the last two permissions.\"microsoft.directory/servicePrincipals/managePermissionGrantsForSelf.mytenant-graph-sites-selected\",\r\n \"microsoft.directory/servicePrincipals/managePermissionGrantsForAll.mytenant-graph-sites-selected\"This is what allows the Delegated and Application permissions to be consented, for “Sites.Selected”.\nNow grant this to an admin from the portal.\n\nNow Adele can login to the portal, create an app registration, add the “Sites.Selected” permission, remove the “User.Read” permission (coz that is not something we allowed in the list above), and do an admin consent (I had to refresh the page after adding the permission, for the “Grant admin consent” button to show).\n\n ", "date_published": "2024-01-02T16:59:39+00:00", "date_modified": "2024-01-02T16:59:39+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "admin consent", "app registrations", 
"mggraph", "microsoft graph", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7455", "url": "https://rakhesh.com/azure/runbook-type-17-not-supported/", "title": "Runbook type \u201917\u2019 not supported. (or: Runbooks stuck in a Queued state)", "content_html": "
Was stuck with an irritating problem today. Installed PowerShell 7.4 on one of my Hybrid Runbook Workers (HRW) but the Runbooks refuse to run using it! Looking at the logs I see entries like this:
Orchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:20.4533023Z] Starting sandbox process. [sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:22.4845620Z] Hybrid Sandbox\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.1564394Z] First Trace Log.\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.4220674Z] Sandbox Recieving Job. [sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c][jobId=0dfa6479-ffca-4f40-a54e-ee353f57734a]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.6251949Z] An unhandled exception was encountered while handling the job action. The sandbox will terminate immediately. [jobId=0dfa6479-ffca-4f40-a54e-ee353f57734a][source=Maintainer][exceptionMessage=System.InvalidOperationException: Runbook type '17' not supported.\r\n at Orchestrator.Runtime.Account.CreateRunbook(RunbookKey runbookKey, RunbookData runbookData) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Runtime\\Account.cs:line 336\r\n at Orchestrator.Runtime.Account.DefaultLoadRunbook(CompositeKey`2 compositeKey) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Runtime\\Account.cs:line 233\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.LoadAndAddValue(TKey key) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 609\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.GetEntry(TKey key) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 582\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.ActWithGuardedValue[TResult](TKey key, Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 352\r\n at Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionWithGuardedAccount(Account account, JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 605\r\n at 
Orchestrator.Shared.Shared.CacheWithLocking`2.<>c__DisplayClass30_0.<ActWithGuardedValue>b__0(TValue value) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 319\r\n at Orchestrator.Shared.Shared.RefCounted`1.UsingValue[TResult](Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\RefCounted.cs:line 153\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.ActWithGuardedValue[TResult](TKey key, Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 352\r\n at Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionSync(JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 329\r\n at Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionThread(JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 680][sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c]
Tried rebooting, updating the OS, downgrading it to PowerShell 7.2 just in case… nothing helped. Double-checked the steps in the docs (there isn’t much) and all looked fine. But the Runbooks just wouldn’t run. They’d stay “Queued” with lots of logs like above being generated, and eventually move to a “Suspended” state.
\nI had two automation accounts, one of them linked to two HRWs; another to one HRW. Both had the same issue.
\nLuckily I also had another automation account + HRW where things worked, so I compared the two and noticed that under the VM in Azure > “Extensions + applications” there’s a difference:
\n\nVersion was 1.1.12.
\n\nWhile on the problem HRW it was:
\n\nOlder version. Same on all three non-working HRWs! These were some of my older HRWs, so I must have been on a preview version or something that never updated.
\nSo I enabled automatic upgrade. Then waited… but that didn’t update anything. Googled, and came across this doc. Looks like the updates only happen for minor versions, while I need to jump from 0.x to 1.x.
\nBtw, here are good instructions on how to enable/disable automatic upgrades when needed.
$extensionType = \"HybridWorkerForLinux/HybridWorkerForWindows\"\r\n$extensionName = \"HybridWorkerExtension\"\r\n$publisher = \"Microsoft.Azure.Automation.HybridWorker\"\r\nSet-AzVMExtension -ResourceGroupName <RGName> -Location <Location> -VMName <vmName> -Name $extensionName -Publisher $publisher -ExtensionType $extensionType -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false
I learnt how to do a manual upgrade:
Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name \"HybridWorkerExtension\" -Publisher \"Microsoft.Azure.Automation.HybridWorker\" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -EnableAutomaticUpgrade $true/$false\n
That worked!
\n\nAnd that did the trick! Now the Runbooks are no longer “Queued”. :)
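If you want to double-check what version the extension is on before or after all this, something like the following should work (same placeholders as in the earlier snippets):

```powershell
# Check the Hybrid Worker extension version and state on the VM
Get-AzVMExtension -ResourceGroupName <VMResourceGroupName> -VMName <VMName> -Name "HybridWorkerExtension" |
    Select-Object Name, TypeHandlerVersion, ProvisioningState
```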
\n", "content_text": "Was stuck with an irritating problem today. Installed PowerShell 7.4 on one of my Hybrid Runbook Workers (HRW) but the Runbooks refuse to run using it! Looking at the logs I see entries like this:Orchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:20.4533023Z] Starting sandbox process. [sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:22.4845620Z] Hybrid Sandbox\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.1564394Z] First Trace Log.\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.4220674Z] Sandbox Recieving Job. [sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c][jobId=0dfa6479-ffca-4f40-a54e-ee353f57734a]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-12-20T15:05:23.6251949Z] An unhandled exception was encountered while handling the job action. The sandbox will terminate immediately. [jobId=0dfa6479-ffca-4f40-a54e-ee353f57734a][source=Maintainer][exceptionMessage=System.InvalidOperationException: Runbook type '17' not supported.\r\n at Orchestrator.Runtime.Account.CreateRunbook(RunbookKey runbookKey, RunbookData runbookData) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Runtime\\Account.cs:line 336\r\n at Orchestrator.Runtime.Account.DefaultLoadRunbook(CompositeKey`2 compositeKey) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Runtime\\Account.cs:line 233\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.LoadAndAddValue(TKey key) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 609\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.GetEntry(TKey key) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 582\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.ActWithGuardedValue[TResult](TKey key, Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 352\r\n at 
Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionWithGuardedAccount(Account account, JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 605\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.<>c__DisplayClass30_0.<ActWithGuardedValue>b__0(TValue value) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 319\r\n at Orchestrator.Shared.Shared.RefCounted`1.UsingValue[TResult](Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\RefCounted.cs:line 153\r\n at Orchestrator.Shared.Shared.CacheWithLocking`2.ActWithGuardedValue[TResult](TKey key, Func`2 action) in C:\\__w\\1\\s\\src\\Shared\\Orchestrator.Shared\\Shared\\CacheWithLocking.cs:line 352\r\n at Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionSync(JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 329\r\n at Orchestrator.Sandbox.SandboxJobActionHandler.HandleJobActionThread(JobData jobData, JobMessageSource source) in C:\\__w\\1\\s\\src\\prod\\Orchestrator.Sandbox\\SandboxJobActionHandler.cs:line 680][sandboxId=ecdbfee3-902a-44c8-b936-d51c8180190c]Tried rebooting, updating the OS, downgrading it to PowerShell 7.2 just in case… nothing helped. Double checked the steps in the docs (there isn’t much) and all looked fine. But the Runbooks just wouldn’t run. They’d stay “Queued” with lot of logs like above being generated, and eventually move to a “Suspended” state.\nI had two automation accounts, one of them linked to two HRWs; another to one HRW. Both had the same issue.\nLuckily I also had another automation account + HRW where things worked, so I compared the two and noticed that under the VM in Azure > “Extensions + applications” there’s a difference:\n\nVersion was 1.1.12.\n\nWhile on the problem HRW it was:\n\nOlder version. Same on all three non-working HRWs! 
There were some of my older HRWs so I must have been on a preview version or something which never updated.\nSo I enabled automatic upgrade. Then waited… but that didn’t update anything. Googled, and came across this doc. Looks like the updates only happen for minor versions, while I need to jump from 0.x to 1.x.\nBtw, good instructions on how to enable/ disable automatic upgrades when needed.$extensionType = \"HybridWorkerForLinux/HybridWorkerForWindows\"\r\n$extensionName = \"HybridWorkerExtension\"\r\n$publisher = \"Microsoft.Azure.Automation.HybridWorker\"\r\nSet-AzVMExtension -ResourceGroupName <RGName> -Location <Location> -VMName <vmName> -Name $extensionName -Publisher $publisher -ExtensionType $extensionType -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$falseI learnt how to do a manual upgrade:Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name \"HybridWorkerExtension\" -Publisher \"Microsoft.Azure.Automation.HybridWorker\" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -EnableAutomaticUpgrade $true/$false\nThat worked!\n\nAnd that did the trick! Now the Runbooks are no longer “Queued”. 
:)", "date_published": "2023-12-20T19:10:09+00:00", "date_modified": "2024-01-20T09:40:59+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "automation", "Azure, Azure AD, Graph, M365", "powershell" ] }, { "id": "https://rakhesh.com/?p=7447", "url": "https://rakhesh.com/exchange/getting-connection-information/", "title": "Getting Connection Information", "content_html": "Probably of use to others, I dunno, but one thing I always want to know is what all Graph, Exchange Online, or PnP PowerShell my current PowerShell session is connected to. Coz if it’s already connected I don’t need to fuss about and reconnect.
\nI guess there’s some way of adding it to fancy prompt engines like oh-my-posh, but all I wanted was a cmdlet I could run to view this. So I wrote up:
function Get-MyConnectionInformation {\r\n $snippetsHash = @{\r\n \"Graph\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n \"Exchange Online\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n \"PnP PowerShell\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n }\r\n\r\n if (Get-MgContext) {\r\n $snippetsHash.Graph.Snippet = \"Connected to $((Get-MgContext).AppName)\"\r\n $snippetsHash.Graph.Status = $true\r\n } else {\r\n $snippetsHash.Graph.Snippet = \"Disconnected\"\r\n $snippetsHash.Graph.Status = $false\r\n }\r\n\r\n if (Get-ConnectionInformation) {\r\n $snippetsHash.\"Exchange Online\".Snippet = \"Connected to $((Get-ConnectionInformation).Organization)\"\r\n $snippetsHash.\"Exchange Online\".Status = $true\r\n } else {\r\n $snippetsHash.\"Exchange Online\".Snippet = \"Disconnected\"\r\n $snippetsHash.\"Exchange Online\".Status = $false\r\n }\r\n\r\n try {\r\n $pnpTemp = Get-PnpConnection -ErrorAction Stop\r\n } catch {}\r\n \r\n if ($pnpTemp) {\r\n $snippetsHash.\"PnP PowerShell\".Snippet = \"Connected to $($pnpTemp.Url)\"\r\n $snippetsHash.\"PnP PowerShell\".Status = $true\r\n } else {\r\n $snippetsHash.\"PnP PowerShell\".Snippet = \"Disconnected\"\r\n $snippetsHash.\"PnP PowerShell\".Status = $false\r\n }\r\n\r\n # An array holding all the snippets\r\n $allSnippets = @()\r\n # For each entity create the combined snippet (so I can find the one with the longest length) and also add it to the above array\r\n foreach ($key in $snippetsHash.Keys) {\r\n $snippetsHash.$key.CombinedSnippet = $key + \"|\" + $snippetsHash.$key.\"Snippet\"\r\n $allSnippets += $snippetsHash.$key.CombinedSnippet\r\n }\r\n\r\n # Find the longest snippet\r\n $longestSnippet = $allSnippets | Sort-Object -Property Length -Descending | Select-Object -First 1\r\n\r\n # For each entity now do the actual outputting, taking into account the longest one and adding dots in 
between\r\n foreach ($key in $snippetsHash.Keys) {\r\n $snippet = $snippetsHash.$key.\"CombinedSnippet\"\r\n\r\n if ($snippet -eq $longestSnippet) {\r\n $numDots = 5\r\n } else {\r\n $numDots = $longestSnippet.Length - $snippet.Length + 5\r\n }\r\n\r\n # Parts of the final output\r\n $snippet1 = $key\r\n $snippet2 = $snippetsHash.$key.Snippet\r\n \r\n Write-Host -NoNewline $snippet1\r\n $counter = 0\r\n do {\r\n Write-Host -NoNewline \".\"\r\n $counter ++\r\n } while ($counter -lt $numDots)\r\n\r\n if ($snippetsHash.$key.Status) {\r\n Write-Host -ForegroundColor Green $snippet2\r\n } else {\r\n Write-Host -ForegroundColor Red $snippet2\r\n }\r\n }\r\n}
It’s more complicated than it needs to be, but that’s because I wanted to get output like this:
\n\nOr:
\n\nNeat, eh!
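As an aside, the dot-padding loop could probably be simplified with String.PadRight — a sketch I haven’t wired into the function above:

```powershell
# Sketch: align a status column by padding each label with dots,
# using PadRight instead of a manual counter loop
$labels = "Graph", "Exchange Online", "PnP PowerShell"
$width = ($labels | Measure-Object -Property Length -Maximum).Maximum + 5
foreach ($label in $labels) {
    Write-Host -NoNewline $label.PadRight($width, '.')
    Write-Host -ForegroundColor Green "Connected"
}
```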
\n", "content_text": "Probably of use to others, I dunno, but one thing I always want to know is what all Graph, Exchange Online, or PnP PowerShell my current PowerShell session is connected to. Coz if it’s already connected I don’t need to fuss about and reconnect.\nI guess there’s some way of adding it to fancy prompt engines like oh-my-posh, but all I wanted was a cmdlet I could run to view this. So I wrote up:function Get-MyConnectionInformation {\r\n $snippetsHash = @{\r\n \"Graph\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n \"Exchange Online\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n \"PnP PowerShell\" = @{\r\n \"Status\" = \"\"\r\n \"Snippet\" = \"\"\r\n \"CombinedSnippet\" = \"\"\r\n }\r\n }\r\n\r\n if (Get-MgContext) {\r\n $snippetsHash.Graph.Snippet = \"Connected to $((Get-MgContext).AppName)\"\r\n $snippetsHash.Graph.Status = $true\r\n } else {\r\n $snippetsHash.Graph.Snippet = \"Disconnected\"\r\n $snippetsHash.Graph.Status = $false\r\n }\r\n\r\n if (Get-ConnectionInformation) {\r\n $snippetsHash.\"Exchange Online\".Snippet = \"Connected to $((Get-ConnectionInformation).Organization)\"\r\n $snippetsHash.\"Exchange Online\".Status = $true\r\n } else {\r\n $snippetsHash.\"Exchange Online\".Snippet = \"Disconnected\"\r\n $snippetsHash.\"Exchange Online\".Status = $false\r\n }\r\n\r\n try {\r\n $pnpTemp = Get-PnpConnection -ErrorAction Stop\r\n } catch {}\r\n \r\n if ($pnpTemp) {\r\n $snippetsHash.\"PnP PowerShell\".Snippet = \"Connected to $($pnpTemp.Url)\"\r\n $snippetsHash.\"PnP PowerShell\".Status = $true\r\n } else {\r\n $snippetsHash.\"PnP PowerShell\".Snippet = \"Disconnected\"\r\n $snippetsHash.\"PnP PowerShell\".Status = $false\r\n }\r\n\r\n # An array holding all the snippets\r\n $allSnippets = @()\r\n # For each entity create the combined snippet (so I can find the one with the longest length) and also add it to the above array\r\n foreach ($key in 
$snippetsHash.Keys) {\r\n $snippetsHash.$key.CombinedSnippet = $key + \"|\" + $snippetsHash.$key.\"Snippet\"\r\n $allSnippets += $snippetsHash.$key.CombinedSnippet\r\n }\r\n\r\n # Find the longest snippet\r\n $longestSnippet = $allSnippets | Sort-Object -Property Length -Descending | Select-Object -First 1\r\n\r\n # For each entity now do the actual outputting, taking into account the longest one and adding dots in between\r\n foreach ($key in $snippetsHash.Keys) {\r\n $snippet = $snippetsHash.$key.\"CombinedSnippet\"\r\n\r\n if ($snippet -eq $longestSnippet) {\r\n $numDots = 5\r\n } else {\r\n $numDots = $longestSnippet.Length - $snippet.Length + 5\r\n }\r\n\r\n # Parts of the final output\r\n $snippet1 = $key\r\n $snippet2 = $snippetsHash.$key.Snippet\r\n \r\n Write-Host -NoNewline $snippet1\r\n $counter = 0\r\n do {\r\n Write-Host -NoNewline \".\"\r\n $counter ++\r\n } while ($counter -lt $numDots)\r\n\r\n if ($snippetsHash.$key.Status) {\r\n Write-Host -ForegroundColor Green $snippet2\r\n } else {\r\n Write-Host -ForegroundColor Red $snippet2\r\n }\r\n }\r\n}It’s more complicated than it needs to be, but that’s because I wanted to get output like this:\n\nOr:\n\nNeat, eh!", "date_published": "2023-12-19T11:13:14+00:00", "date_modified": "2023-12-19T11:17:41+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "powershell", "Azure, Azure AD, Graph, M365", "Exchange, Exchange Online" ] }, { "id": "https://rakhesh.com/?p=7445", "url": "https://rakhesh.com/exchange/there-are-multiple-recipients-matching-the-identity/", "title": "There are multiple recipients matching the identity\u2026", "content_html": "I was trying to add someone to an 
Exchange Online distribution group and got the following error:
Ex8155FB|Microsoft.Exchange.Configuration.Tasks.ManagementObjectAmbiguousException|There are multiple recipients matching the identity "abc@def.com". Please specify a unique value.
The solution for this is to do a search like this:
\nGet-EXORecipient -ResultSize unlimited | Where-Object { $_.EmailAddresses -match \"<part of the address>\" } | Format-List Alias,Name,DisplayName,@{Name=\"EmailAddresses\";Expression={$_.EmailAddresses -join ';'}}
From the matches you can then use the Id
property to add the user.\n(Am on a roll this week! After a long dry spell this is my 4th post in 3 days / 5th post this month…)
\nSince earlier this week whenever I’d launch VSCode or VSCodium (yes I started using the latter now) I’d always get a message saying it’s unable to resolve my shell environment. Like here.
\nI tracked it down to the fact that earlier this week I had started launching tmux
by default on my desktops. Previously I would only launch it when SSHing into a machine but I figured it’s good to have it always launch on my desktops so I can easily connect to them from elsewhere. This was obviously interfering with VSCode. Interestingly, I had added the tmux
launching code in .bash_profile
so as to limit it to login sessions, but I guess VSCode launches bash as a login shell.
What to do here? Inspired by this GitHub issue I added the following line to my .bash_profile
and disabled the bit that launches tmux
.
env > ~/Downloads/blah.txt
This dumps the environment variables that are present when the shell is launched by VSCode. Here I found a variable VSCODE_RESOLVING_ENVIRONMENT
that was set to 1. Boom, that’s my entry!
Before I go further, here’s how I currently launch tmux
.
# Machines where I always want to launch tmux\r\ntmux_by_default=(\r\n \"machine1\"\r\n \"machine2\" \r\n)\r\n\r\n# A function to join the array\r\n# https://stackoverflow.com/questions/1527049/how-can-i-join-elements-of-a-bash-array-into-a-delimited-string\r\nfunction join_by { local IFS=\"$1\"; shift; echo \"$*\"; }\r\n# So join_by \"|\" \"${tmux_by_default[@]}\" will create \"machine1|machine2\" which I can then regex over :)\r\n\r\n\r\nif [[ $(hostname) =~ $(join_by \"|\" \"${tmux_by_default[@]}\") ]] || [[ ! -z \"$SSH_TTY\" ]]; then\r\n # Check if tmux is installed, else warn that\r\n if command -v tmux &> /dev/null; then\r\n\r\n # Check if I am in a tmux session already. If I am not in one then $TMUX is empty. \r\n if [[ -z \"$TMUX\" ]]; then\r\n if tmux ls | grep -qv attached; then\r\n # attach to the existing session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux at && exit\r\n else\r\n # create a new session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux new-session && exit\r\n fi\r\n fi\r\n\r\n else\r\n echo -e \"tmux missing!\"\r\n fi\r\nfi
For certain machines it always launches, for the rest only if I am connecting via SSH. So I made one change:
# Machines where I always want to launch tmux\r\ntmux_by_default=(\r\n \"machine1\"\r\n \"machine2\" \r\n)\r\n\r\n# A function to join the array\r\n# https://stackoverflow.com/questions/1527049/how-can-i-join-elements-of-a-bash-array-into-a-delimited-string\r\nfunction join_by { local IFS=\"$1\"; shift; echo \"$*\"; }\r\n# So join_by \"|\" \"${tmux_by_default[@]}\" will create \"machine1|machine2\" which I can then regex over :)\r\n\r\n\r\nif ([[ $(hostname) =~ $(join_by \"|\" \"${tmux_by_default[@]}\") ]] && [[ -z \"$VSCODE_RESOLVING_ENVIRONMENT\" ]]) || ([[ ! -z \"$SSH_TTY\" ]]); then\r\n # Check if tmux is installed, else warn that\r\n if command -v tmux &> /dev/null; then\r\n\r\n # Check if I am in a tmux session already. If I am not in one then $TMUX is empty. \r\n if [[ -z \"$TMUX\" ]]; then\r\n if tmux ls | grep -qv attached; then\r\n # attach to the existing session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux at && exit\r\n else\r\n # create a new session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux new-session && exit\r\n fi\r\n fi\r\n\r\n else\r\n echo -e \"tmux missing!\"\r\n fi\r\nfi
Now the section that always launches it on certain machines also checks if the VSCODE_RESOLVING_ENVIRONMENT
variable is empty and only then launches.
Sweet!
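As an aside, the pattern that join_by builds for the regex match is just the machine names joined with a pipe. Sketched in Python for illustration (machine1/machine2 are the placeholder hostnames from the snippet above, not real machines):

```python
# The regex alternation the bash join_by helper produces, sketched in
# Python. machine1/machine2 are placeholder hostnames from the post.
import re

tmux_by_default = ['machine1', 'machine2']
pattern = '|'.join(tmux_by_default)   # 'machine1|machine2'

print(bool(re.search(pattern, 'machine2.example.lan')))  # True
```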
\nps. I knew about VSCodium but didn’t really look into switching to it. Then I came across this blog post and that resonated with me so I started making the switch. I still have VSCode around coz a couple of the extensions I use only work with it, but I primarily use VSCodium otherwise. I recommend others give it a shot too. I also recommend the Night Owl theme, it’s available on both.
\n", "content_text": "(Am on a roll this week! After a long dry spell this is my 4th post in 3 days / 5th post this month…)\nSince earlier this week whenever I’d launch VSCode or VSCodium (yes I started using the latter now) I’d always get a message saying its unable to resolve my shell environment. Like here.\nI tracked it down to the fact that earlier this week I had started launching tmux by default on my desktops. Previously I would only launch it when SSHing into a machine but I figured it’s good to have it always launch on my desktops so I can easily connect to them from elsewhere. This was obviously interfering with VSCode. Interestingly, I had added the tmux launching code in .bash_profile so as to limit it to login sessions, but I guess VSCode launches bash as a login shell.\nWhat to do here? Inspired by this GitHub issue I added the following line to my .bash_profile and disabled the bit that launches tmux.env > ~/Downloads/blah.txtThis dumps the environment variables that are present when the shell is launched by VSCode. Here I found a variable VSCODE_RESOLVING_ENVIRONMENT that was set to 1. Boom, that’s my entry!\nBefore I go further, here’s how I currently launch tmux.# Machines where I always want to launch tmux\r\ntmux_by_default=(\r\n \"machine1\"\r\n \"machine2\" \r\n)\r\n\r\n# A function to join the array\r\n# https://stackoverflow.com/questions/1527049/how-can-i-join-elements-of-a-bash-array-into-a-delimited-string\r\nfunction join_by { local IFS=\"$1\"; shift; echo \"$*\"; }\r\n# So join_by \"|\" \"${tmux_by_default[@]}\" will create \"machine1|machine2\" which I can then regex over :)\r\n\r\n\r\nif [[ $(hostname) =~ $(join_by \"|\" \"${tmux_by_default[@]}\") ]] || [[ ! -z \"$SSH_TTY\" ]]; then\r\n # Check if tmux is installed, else warn that\r\n if command -v tmux &> /dev/null; then\r\n\r\n # Check if I am in a tmux session already. If I am not in one then $TMUX is empty. 
\r\n if [[ -z \"$TMUX\" ]]; then\r\n if tmux ls | grep -qv attached; then\r\n # attach to the existing session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux at && exit\r\n else\r\n # create a new session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux new-session && exit\r\n fi\r\n fi\r\n\r\n else\r\n echo -e \"tmux missing!\"\r\n fi\r\nfiFor certain machines it always launches, for the rest only if I am connecting via SSH. So I made one change:# Machines where I always want to launch tmux\r\ntmux_by_default=(\r\n \"machine1\"\r\n \"machine2\" \r\n)\r\n\r\n# A function to join the array\r\n# https://stackoverflow.com/questions/1527049/how-can-i-join-elements-of-a-bash-array-into-a-delimited-string\r\nfunction join_by { local IFS=\"$1\"; shift; echo \"$*\"; }\r\n# So join_by \"|\" \"${tmux_by_default[@]}\" will create \"machine1|machine2\" which I can then regex over :)\r\n\r\n\r\nif ([[ $(hostname) =~ $(join_by \"|\" \"${tmux_by_default[@]}\") ]] && [[ -z \"$VSCODE_RESOLVING_ENVIRONMENT\" ]]) || ([[ ! -z \"$SSH_TTY\" ]]); then\r\n # Check if tmux is installed, else warn that\r\n if command -v tmux &> /dev/null; then\r\n\r\n # Check if I am in a tmux session already. If I am not in one then $TMUX is empty. \r\n if [[ -z \"$TMUX\" ]]; then\r\n if tmux ls | grep -qv attached; then\r\n # attach to the existing session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux at && exit\r\n else\r\n # create a new session and when it exits if the exit code is 0 then exit the shell (C-b d in tmux has exit code 0)\r\n exec tmux new-session && exit\r\n fi\r\n fi\r\n\r\n else\r\n echo -e \"tmux missing!\"\r\n fi\r\nfiNow the section that always launches it on certain machines also checks if the VSCODE_RESOLVING_ENVIRONMENT variable is empty and only then launches.\nSweet!\nps. 
I knew about VSCodium but didn’t really look into switching to it. Then I came across this blog post and that resonated with me so I started making the switch. I still have VSCode around coz a couple of the extensions I use only work with it, but I primarily use VSCodium otherwise. I recommend others give it a shot too. I also recommend the Night Owl theme, it’s available on both.", "date_published": "2023-12-13T20:08:30+00:00", "date_modified": "2023-12-13T20:08:30+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "tmux", "vscode", "Linux & BSD", "Mac" ] }, { "id": "https://rakhesh.com/?p=7436", "url": "https://rakhesh.com/aside/muting-edge-tabs-by-default/", "title": "Muting Edge tabs by default", "content_html": "Maybe it’s a default setting now, but it wasn’t in my case, and I was getting super bugged by Edge sound notifications. I don’t actually use Edge much (Firefox being my preferred browser) but I have it open for some work stuff and any time someone raises a ticket or replies to a ticket and if I have our ticketing system open it makes a ding sound. Of course I can mute the tab via Ctrl+M
but sometimes I forget to, and when I log out for the day I can still hear the browser making these ding sounds. Aargh.
Turns out there’s a hidden way of muting by default. Pop in this URL in Edge: edge://flags/#edge-sound-content-setting
Flip it from “Default” to “Enabled”, as in the screenshot. Then under Settings > Cookies and Site Permissions you will now see a “Sound” section.
\n\nYou can set it to be blocked by default and make exceptions, or enabled by default and block specific sites.
\n\nThanks to this forum post where I discovered this.
\n", "content_text": "Maybe it’s a default setting now, but it wasn’t in my case, and I was getting super bugged by Edge sound notifications. I don’t actually use Edge much (Firefox being my preferred browser) but I have it open for some work stuff and any time some raises a ticket or replies to a ticket and if I have our ticketing system open it makes a ding sound. Of course I can mute the tab via Ctrl+M but sometimes I forget to and when I logout for the day I can still hear the browser making these ding sounds. Aargh.\nTurns out there’s a hidden way of muting by default. Pop in this URL in Edge: edge://flags/#edge-sound-content-setting\n\nFlip it from “Default” to “Enabled”, as in the screenshot. And then in Settings > Cookies and Site Permissions > you will now see a “Sound” section.\n\nYou can set it to be blocked by default and make exceptions, or enabled by default and block specific sites.\n\nThanks to this forum post where I discovered this.", "date_published": "2023-12-13T11:09:10+00:00", "date_modified": "2023-12-13T11:09:10+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Edge", "Asides" ] }, { "id": "https://rakhesh.com/?p=7432", "url": "https://rakhesh.com/linux-bsd/tailscale-app-connectors/", "title": "Tailscale app connectors", "content_html": "Ooh, woke up to a blog post from Tailscale about an interesting new feature called app connectors. I am very excited for it!
\nBasically you can set up one or more nodes in your tailnet as the exit node for traffic to specific domains. So, for example, say I want to watch Netflix but have it think I am visiting from the US, and I have a tailnet node in the US, I can just set all the Netflix domains (netflix.com, *.netflix.com, and whatever else is needed) to route via this node in the US. I don’t have to set it as my exit node and have everything go via that, I can have just the specific traffic go via that from all my nodes. So neat!
\nOf course, Netflix is just a contrived example, but as an IT person this is super useful in other scenarios. For instance, I have my test Entra ID tenant (previously known as Azure AD). Using Conditional Access policies I want to lock down my admin account to specific IPs – like my home public IP for instance. I can do that, but when I am out in a cafe or something I’d then have to use some node in my home as the exit node so the login traffic appears as if from the home public IP, and I don’t really want to do that. But what I can do now though is assign one or more of my nodes at home as the app connector for login.microsoftonline.com (and other domains too I guess) and then all traffic for logging in to Entra ID goes via that node… for all my nodes. It doesn’t matter if I am at home or outside, since all my machines have Tailscale installed by default the traffic for just these domains will automagically go via my home node. So awesome!
\nWhat’s more, I have one node at home which also has WireGuard installed on it and I use it along with Tailscale. WireGuard connects to a VPN provider, and whenever I want to use this VPN provider from any of my Tailnet machines I would just use this node as the exit node. I can still do that, but now I can also take things one step more granular. Say there’s a specific site I always want to visit via this VPN. As of now I’d have to always use this WireGuard node as my exit node just to visit that site, forcing all my traffic to go via that exit node, but now I can just create an app connector to these domains on this particular node and any traffic to these domains from any of my Tailnet machines will go via this WireGuard connected node, thus having a sort of VPN connection just for this domain. :)
\nNice!
\nBefore I end, one really neat thing about Tailscale’s user friendliness. I had noticed this in the past but only now get a chance to post it. One of the steps when doing a lot of Tailscale-related activities is to run the tailscale up
command again with some switches. Problem is, you might have already run the command in the past with a different set of switches, so you can’t simply run it again with only the new switches. You must specify the previous and the new switches. But a lot of times you might have forgotten what switches you used in the past, or might not be sure whether you even used any switches. How the Tailscale CLI handles this is beautiful, because when I enter a command like this for instance:
sudo tailscale up --advertise-connector --advertise-tags=tag:connector
It does not just error out or tell me to add the previous switches too, it actually gives a helpful error message and also outputs the whole command so I can just copy-paste it and run it (the sudo
is missing, but that’s a minor point).
Error: changing settings via 'tailscale up' requires mentioning all\r\nnon-default flags. To proceed, either re-run your command with --reset or\r\nuse the command below to explicitly mention the current value of\r\nall non-default settings:\r\n\r\n tailscale up --advertise-connector --advertise-tags=tag:aad-connector --advertise-exit-node
That’s such fine attention to detail! Kudos.
\n", "content_text": "Ooh, woke up to a blog post from Tailscale about an interesting new feature called app connectors. I am very excited for it!\nBasically you can setup one or more nodes in your tailnet as the exit node for traffic to specific domains. So, for example, say I want to watch Netflix but have it think I am visiting from the US, and I have a tailnet node in the US, I can just set all the Netflix domains (netflix.com, *.netflix.com, and whatever else is needed) to route via this node in the US. I don’t have to set it as my exit node and have everything go via that, I can have just the specific traffic go via that from all my nodes. So neat!\nOf course, Netflix is just a contrived example, but as an IT person this is super useful in other scenarios. For instance, I have my test Entra ID tenant (previously known as Azure AD). Using Conditional Access policies I want to lock down my admin account to specific IPs – like my home public IP for instance. I can do that, but when I am out in a cafe or something I’d then have to use some node in my home as the exit node so the login traffic appears as if from the home public IP, and I don’t really want to do that. But what I can do now though is assign one or more of my nodes at home as the app connector for login.microsoftonline.com (and other domains too I guess) and then all traffic for logging in to Entra ID goes via that node… for all my nodes. It doesn’t matter if I am at home or outside, since all my machines have Tailscale installed by default the traffic for just these domains will automagically go via my home node. So awesome!\nWhat’s more, I have one node at home which also has WireGuard installed on it and I use it along with Tailscale. WireGuard connectors a VPN provider and whenever I want to use this VPN provider from any of my Tailnet machines I would just use this node as the exit node. I can still do that, but now I can also take things one step granular. 
Say there’s a specific site I always want to visit via this VPN. As of now I’d have to always use this WireGuard node as my exit node just to visit that site, forcing all my traffic to go via that exit node, but now I can just create an app connector to these domains on this particular node and any traffic to these domains from any of my Tailnet machines will go via this WireGuard connected node, thus having a sort of VPN connection just for this domain. :)\nNice!\nBefore I end, one really neat thing about Tailscale’s user friendliness. I had noticed this in the past but get a chance to post it. One of the steps when doing a lot of Tailscale related activities is the run the tailscale up command again with some switches. Problem is, you might already run the switch in the past with a different set of switches, so you can’t simply run it again with only the new switches. You must specify the previous and new switches. But a lot of times you might have forgotten what switches you used in the past, or not sure whether you even sed any switches. How the Tailscale CLI handles this is beautiful, because when I enter a command like this for instance:sudo tailscale up --advertise-connector --advertise-tags=tag:connectorIt does not just error out or tell me to add the previous switches too, it actually gives a helpful error message and also outputs the whole command so I can just copy paste it and run (the sudo is missing, but that’s a minor point).Error: changing settings via 'tailscale up' requires mentioning all\r\nnon-default flags. To proceed, either re-run your command with --reset or\r\nuse the command below to explicitly mention the current value of\r\nall non-default settings:\r\n\r\n tailscale up --advertise-connector --advertise-tags=tag:aad-connector --advertise-exit-nodeThat’s such a fine a attention to detail! 
Kudos.", "date_published": "2023-12-13T10:08:28+00:00", "date_modified": "2023-12-13T10:09:39+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "tailscale", "vpn", "Wireguard", "Linux & BSD", "Networks" ] }, { "id": "https://rakhesh.com/?p=7430", "url": "https://rakhesh.com/azure/docker-powershell-microsoft-graph/", "title": "Docker, PowerShell, Microsoft Graph", "content_html": "It’s past 1.30am. Ideally I should be in bed, but I am not. Coz I am engrossed with this issue I came across today and it took me down a rabbit hole. It’s been a while since I went down rabbit holes, but here we are!
\nWhat’s the issue? A while back I had blogged about Graph API delta queries. Essentially you can do a delta query to just get the changes to a group since the last time you made a query. Here’s the Microsoft page on it, and notice the example they give?
https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq '477e9fc6-5de7-4406-bb2a-7e5c83c9ffff' or id eq '004d6a07-fe70-4b92-add5-e6e37b8affff'
This used to work for me in the past, but today when I tried the same query, Graph threw errors.
> Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff'''\r\n\r\nInvoke-MgGraphRequest: GET https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27\r\nHTTP/1.1 400 Bad Request\r\nTransfer-Encoding: chunked\r\nVary: Accept-Encoding\r\nStrict-Transport-Security: max-age=31536000\r\nrequest-id: 120bb473-9fa3-4958-8a16-0a6f3616ec08\r\nclient-request-id: 99ec7a6c-7af5-485e-a130-c693e668bc4d\r\nx-ms-ags-diagnostic: {\"ServerInfo\":{\"DataCenter\":\"Canada Central\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"002\",\"RoleInstance\":\"YT2PEPF00000168\"}}\r\nDate: Tue, 12 Dec 2023 01:38:14 GMT\r\nContent-Type: application/json\r\nContent-Encoding: gzip\r\n\r\n{\"error\":{\"code\":\"BadRequest\",\"message\":\"The request URI is not valid. The segment 'delta' must be the last segment in the URI because it is one of the following: $ref, $batch, $count, $value, $metadata, a named media resource, an action, a noncomposable function, an action import, a noncomposable function import, an operation with void return type, or an operation import with void return type.\",\"innerError\":{\"date\":\"2023-12-12T01:38:14\",\"request-id\":\"120bb473-9fa3-4958-8a16-0a6f3616ec08\",\"client-request-id\":\"99ec7a6c-7af5-485e-a130-c693e668bc4d\"}}}
Huh?
\nThis stumped me for a bit. The same URL when put into Graph Explorer worked fine, so I knew things still worked. But why was the cmdlet throwing an error?
\nThe reason seemed to be in how it’s mangling the URL. Notice how it’s become https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27
Adding the -Debug
switch to the cmdlet too showed it was doing that.
Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff''' -Debug\r\n\r\nVERBOSE: GET https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27 with 0-byte payload\r\n\r\nConfirm\r\nContinue with this operation?\r\n[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is \"Y\"):
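My guess at what is happening here (an assumption on my part, not something I dug out of the module source): the query string appears to be percent-encoded twice. A space encodes to %20, and encoding that result again encodes the % itself, which is exactly the %2520 visible in the mangled URL. A quick Python illustration:

```python
from urllib.parse import quote

# A space percent-encodes to %20; encoding that again encodes the '%'
# itself as %25, producing the %2520 seen in the mangled URL.
once = quote(' ', safe='')    # %20
twice = quote(once, safe='')  # %2520
print(once, twice)
```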
My older Runbooks where I use this were working though. They were on an older version of Graph, so it could be that some newer version broke things. How do I verify this?
\nI didn’t want to downgrade my version of the Graph modules, nor do I have multiple VMs lying around to play with this (well, I do… but that’s not much fun). I need some way of firing up temporary instances of something where I could install different versions of the module and see where it breaks.
\nEnter Docker! Something I haven’t touched in ages. :)
\nDoes Microsoft have an official Docker image with Graph modules, perhaps? Why yes, they do – but it looks like no one’s updated it since Graph 1.28.0.
\nDo they have a PowerShell image? Yes. So I could just use that as my base I suppose.
\nIn theory things should have been straightforward here, but it’s been a while and I installed Docker in a Linux VM and ran into some trouble with the networking side (the build process couldn’t talk to the outside world). After a fair amount of late-night Googling I realized I had to change the networking to host networking (as opposed to the default bridge networking which wasn’t working for me) and also add DNS settings for the Docker daemon (thanks to this post) (looks like DNS issues bit me in the past too). Long story short I got this bit working and created the following Dockerfile:
FROM mcr.microsoft.com/powershell\r\nARG GRAPH_VERSION\r\nCOPY Install-Graph.ps1 /root/\r\nRUN pwsh /root/Install-Graph.ps1 $GRAPH_VERSION\r\nCMD [ \"pwsh\" ]
I’ll explain what it does in a bit. The Install-Graph.ps1
script it refers to is:
param([string]$version)\r\nSet-PSRepository -Name 'PSGallery' -InstallationPolicy Trusted\r\n\r\nif ($version) {\r\n Write-Output \"Downloading version $version of Microsoft.Graph module\"\r\n Install-Module Microsoft.Graph -RequiredVersion $version\r\n} else {\r\n Write-Output \"Downloading latest version of Microsoft.Graph module\"\r\n Install-Module Microsoft.Graph\r\n}
This just downloads the version of Graph passed as a parameter to it, or the latest version if no parameter is specified. And what the Dockerfile does is basically 1) pull the Microsoft PowerShell image, 2) take the GRAPH_VERSION
variable from the arguments, if any, 3) copy the PowerShell script to the image, and 4) run it so it downloads the files.
There are probably better ways of doing this, such as multi-stage builds, but this quick-and-dirty method suits my current requirement.
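For the record, a multi-stage variant might look roughly like this. This is an untested sketch on my part: the stage name is mine, and the module path is my assumption about where Install-Module drops files for the root user on Linux.

```dockerfile
# Build stage: install the requested Microsoft.Graph version
FROM mcr.microsoft.com/powershell AS builder
ARG GRAPH_VERSION
COPY Install-Graph.ps1 /root/
RUN pwsh /root/Install-Graph.ps1 $GRAPH_VERSION

# Final stage: copy only the installed modules across
# (assumed user-scope module path for root on Linux)
FROM mcr.microsoft.com/powershell
COPY --from=builder /root/.local/share/powershell/Modules /root/.local/share/powershell/Modules
CMD pwsh
```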
\nI can now build various Graph PowerShell images thus.
\nSay I want to build one with the latest version:
docker build --progress=plain --no-cache . --network host -t graph-latest
Or a specific version:
docker build --progress=plain --no-cache . --network host --build-arg=\"GRAPH_VERSION=2.0.0\" -t graph-2.0.0
And I can then launch these via (this is for the first image I built):
docker run -it --network host -v /path/to/.config/powershell:/root/.config/powershell:ro graph-latest
If I then run the Graph query against the 2.0.0 version of the module I can see it works fine:
Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff'''\r\n\r\nName Value\r\n---- -----\r\n@odata.context https://graph.microsoft.com/v1.0/$metadata#groups\r\n@odata.nextLink https://graph.microsoft.com/v1.0/groups/delta/?$skiptoken=l-ojA6nnMNq-BLHBnOZMTzMwDcHVnE3MaXWwhuSPEymMmONnYEJlJxxu7lK\u2026\r\nvalue {}
I then went from version 2.8.0 (just a random starting point 2 months in the past) to 2.9.0 to 2.10.0 and saw that it was working fine in all these versions. Just the latest – 2.11.0 – seemed to be broken. Hah! Just my lucky day, I guess, coz 2.11.0 was released earlier today… I mean yesterday, now that it’s past midnight.
I thought I’d log an issue in the GitHub repo but it looks like Invoke-MgGraphRequest
is broken for other calls too. Someone logged that issue just 4 hours ago… around the time I started fooling around with Docker to look into this. Nice!
Update 13th Dec: The code is on GitHub and I am publishing the container images there. See https://github.com/rakheshster/docker-powershell-msgraph.
\nUpdate 20th Feb 2024: A follow up blog post on auto-updating the Docker image. See https://rakhesh.com/linux-bsd/automatically-publishing-new-versions-of-my-graph-powershell-docker-image/.
\n", "content_text": "It’s past 1.30am. Ideally I should be in bed, but I am not. Coz I am engrossed with this issue I came across today and it took me down a rabbit hole. It’s been a while since I went down rabbit holes, but here we are!\nWhat’s the issue? A while back I had blogged about Graph API delta queries. Essentially you can do a delta query to just get the changes to a group since the last time you made a query. Here’s the Microsoft page on it, and notice the example they give?https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq '477e9fc6-5de7-4406-bb2a-7e5c83c9ffff' or id eq '004d6a07-fe70-4b92-add5-e6e37b8affff'This used to work for me in the past, but today when trying the same Graph threw errors.> Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff'''\r\n\r\nInvoke-MgGraphRequest: GET https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27\r\nHTTP/1.1 400 Bad Request\r\nTransfer-Encoding: chunked\r\nVary: Accept-Encoding\r\nStrict-Transport-Security: max-age=31536000\r\nrequest-id: 120bb473-9fa3-4958-8a16-0a6f3616ec08\r\nclient-request-id: 99ec7a6c-7af5-485e-a130-c693e668bc4d\r\nx-ms-ags-diagnostic: {\"ServerInfo\":{\"DataCenter\":\"Canada Central\",\"Slice\":\"E\",\"Ring\":\"5\",\"ScaleUnit\":\"002\",\"RoleInstance\":\"YT2PEPF00000168\"}}\r\nDate: Tue, 12 Dec 2023 01:38:14 GMT\r\nContent-Type: application/json\r\nContent-Encoding: gzip\r\n\r\n{\"error\":{\"code\":\"BadRequest\",\"message\":\"The request URI is not valid. 
The segment 'delta' must be the last segment in the URI because it is one of the following: $ref, $batch, $count, $value, $metadata, a named media resource, an action, a noncomposable function, an action import, a noncomposable function import, an operation with void return type, or an operation import with void return type.\",\"innerError\":{\"date\":\"2023-12-12T01:38:14\",\"request-id\":\"120bb473-9fa3-4958-8a16-0a6f3616ec08\",\"client-request-id\":\"99ec7a6c-7af5-485e-a130-c693e668bc4d\"}}}Huh?\nThis stumped me for a bit. The same URL when put into Graph Explorer worked fine, so I knew things still worked. But why was the cmdlet throwing an error?\nThe reason seemed to be in how it’s mangling the URL. Notice how it’s become https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27\nAdding the -Debug switch to the cmdlet too showed it was doing that.Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff''' -Debug\r\n\r\nVERBOSE: GET https://graph.microsoft.com/v1.0/groups/delta/%3F%24filter%3D%2520id%2520eq%2520%27477e9fc6-5de7-4406-bb2a-7e5c83c9ffff%27%2520or%2520id%2520eq%2520%27004d6a07-fe70-4b92-add5-e6e37b8affff%27 with 0-byte payload\r\n\r\nConfirm\r\nContinue with this operation?\r\n[Y] Yes [A] Yes to All [H] Halt Command [S] Suspend [?] Help (default is \"Y\"):My older Runbooks where I use this were working though. They were on an older version of Graph, so it could be that some newer version broke things. How do I verify this?\nI didn’t want to downgrade my version of the Graph modules, nor do I have multiple VMs lying around to play with this (well, I do… but that’s not much fun). 
I need some way of firing up temporary instances of something where I could install different versions of the module and see where it breaks.\nEnter Docker! Something I haven’t touched in ages. :)\nDoes Microsoft have an official Docker image with Graph modules, perhaps? Why yes, they do – but it looks like no one’s updated it since Graph 1.28.0.\nDo they have a PowerShell image? Yes. So I could just use that as my base I suppose.\nIn theory things should have been straightforward here, but it’s been a while and I installed Docker in a Linux VM and ran into some trouble with the networking side (the build process couldn’t talk to the outside world). After a fair amount of late night Googling I realized I had to change the networking to host networking (as opposed to the default bridge networking which wasn’t working for me) and also add DNS settings for the Docker daemon (thanks to this post) (looks like DNS issues bit me in the past too). Long story short I got this bit working and created the following Dockerfile:FROM mcr.microsoft.com/powershell\r\nARG GRAPH_VERSION\r\nCOPY Install-Graph.ps1 /root/\r\nRUN pwsh /root/Install-Graph.ps1 $GRAPH_VERSION\r\nCMD [ \"pwsh\" ]I’ll explain what it does in a bit. The Install-Graph.ps1 script it refers to is:param([string]$version)\r\nSet-PSRepository -Name 'PSGallery' -InstallationPolicy Trusted\r\n\r\nif ($version) {\r\n Write-Output \"Downloading version $version of Microsoft.Graph module\"\r\n Install-Module Microsoft.Graph -RequiredVersion $version\r\n} else {\r\n Write-Output \"Downloading latest version of Microsoft.Graph module\"\r\n Install-Module Microsoft.Graph\r\n}This just downloads the version of Graph passed as a parameter to it, or the latest version if no parameter is specified. 
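By the way, the DNS fix I mentioned went into the Docker daemon config; a minimal sketch of it (the resolver addresses here are placeholders – use whatever works on your network):

```json
{
  "dns": ["1.1.1.1", "8.8.8.8"]
}
```

On a typical Linux install this lives at /etc/docker/daemon.json; restart the daemon after editing it.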
And what the Dockerfile does is basically 1) pull the Microsoft PowerShell image, 2) take the GRAPH_VERSION variable from the arguments, if any, 3) copy the PowerShell script to the image, and 4) run it so it downloads the files.\nThere are probably better ways of doing this, such as multistage builds, but this quick and dirty method suits my current requirement.\nI can now build various Graph PowerShell images thus.\nSay, I want to build one with the latest:docker build --progress=plain --no-cache . --network host -t graph-latestOr a specific version:docker build --progress=plain --no-cache . --network host --build-arg=\"GRAPH_VERSION=2.0.0\" -t graph-2.0.0And I can then launch these via (this is for the first image I built):docker run -it --network host -v /path/to/.config/powershell:/root/.config/powershell:ro graph-latestIf I then run the Graph query against the 2.0.0 version of the module I can see it works fine:Invoke-MgGraphRequest -Method 'GET' -Uri 'https://graph.microsoft.com/v1.0/groups/delta/?$filter= id eq ''477e9fc6-5de7-4406-bb2a-7e5c83c9ffff'' or id eq ''004d6a07-fe70-4b92-add5-e6e37b8affff'''\r\n\r\nName Value\r\n---- -----\r\n@odata.context https://graph.microsoft.com/v1.0/$metadata#groups\r\n@odata.nextLink https://graph.microsoft.com/v1.0/groups/delta/?$skiptoken=l-ojA6nnMNq-BLHBnOZMTzMwDcHVnE3MaXWwhuSPEymMmONnYEJlJxxu7lK\u2026\r\nvalue {}I then went from version 2.8.0 (just a random starting point 2 months in the past) to 2.9.0 to 2.10.0 and saw that it was working fine in all these versions. Just the latest – 2.11.0 – seemed to be broken. Hah! Just my lucky day, I guess, coz 2.11.0 was released just yesterday.\nI thought I’d log an issue in the GitHub repo but looks like Invoke-MgGraphRequest is broken for other calls too. Someone logged that issue just 4 hours ago… around the time I started fooling around with Docker to look into this. Nice!\nUpdate 13th Dec: The code is on GitHub and I am publishing the container images there. 
See https://github.com/rakheshster/docker-powershell-msgraph.\nUpdate 20th Feb 2024: A follow up blog post on auto-updating the Docker image. See https://rakhesh.com/linux-bsd/automatically-publishing-new-versions-of-my-graph-powershell-docker-image/.", "date_published": "2023-12-12T02:12:12+00:00", "date_modified": "2024-02-20T18:29:21+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Docker", "microsoft graph", "powershell", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7428", "url": "https://rakhesh.com/azure/new-azdatacollectionruleassociation-operation-returned-an-invalid-status-code-badrequest/", "title": "New-AzDataCollectionRuleAssociation \u2013 Operation returned an invalid status code \u2018BadRequest\u2019", "content_html": "Was trying to associate some data collection rules in Azure and the cmdlet kept throwing up this unhelpful error:
Exception type: ErrorResponseCommonV2Exception, Message: Microsoft.Azure.Management.Monitor.Models.ErrorResponseCommonV2Exception: Operation returned an invalid status code 'BadRequest' at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperations.<CreateWithHttpMessagesAsync>d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperationsExtensions.<CreateAsync>d__7.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperationsExtensions.Create(IDataCollectionRuleAssociationsOperations operations, String resourceUri, String associationName, DataCollectionRuleAssociationProxyOnlyResource body) at Microsoft.Azure.Commands.Insights.DataCollectionRules.NewAzureRmDataCollectionRuleAssociationCommand.ProcessRecordInternalByDataCollectionRuleId() at Microsoft.Azure.Commands.Insights.MonitorCmdletBase.ExecuteCmdlet(), Code: Null, Status code:Null, Reason phrase: Null
After a bit of trial and error I figured out the issue. The association name shouldn’t contain spaces etc. I was doing -AssociationName \"blah blah\"
while it should have been -AssociationName \"blah-blah\"
.
Could have just told me that!
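In hindsight a quick sanity check on the name before calling the cmdlet would have saved the trial and error. A rough sketch (the allowed-character set here is my conservative guess, not a documented rule):

```shell
# flag association names with spaces or other suspect characters before
# handing them to New-AzDataCollectionRuleAssociation
name="blah blah"
case "$name" in
  *[!A-Za-z0-9._-]*) echo "invalid: $name" ;;   # → invalid: blah blah
  *) echo "ok: $name" ;;
esac
```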
\n", "content_text": "Was trying to associate some data collection rules in Azure and the cmdlet kept throwing up this unhelpful error:Exception type: ErrorResponseCommonV2Exception, Message: Microsoft.Azure.Management.Monitor.Models.ErrorResponseCommonV2Exception: Operation returned an invalid status code 'BadRequest' at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperations.<CreateWithHttpMessagesAsync>d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperationsExtensions.<CreateAsync>d__7.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.Monitor.DataCollectionRuleAssociationsOperationsExtensions.Create(IDataCollectionRuleAssociationsOperations operations, String resourceUri, String associationName, DataCollectionRuleAssociationProxyOnlyResource body) at Microsoft.Azure.Commands.Insights.DataCollectionRules.NewAzureRmDataCollectionRuleAssociationCommand.ProcessRecordInternalByDataCollectionRuleId() at Microsoft.Azure.Commands.Insights.MonitorCmdletBase.ExecuteCmdlet(), Code: Null, Status code:Null, Reason phrase: NullAfter a bit of trial and error I figured out the issue. The association name shouldn’t contain spaces etc. 
I was doing -AssociationName \"blah blah\" while it should have been -AssociationName \"blah-blah\".\nCould have just told me that!", "date_published": "2023-12-06T10:58:43+00:00", "date_modified": "2023-12-06T10:58:43+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7424", "url": "https://rakhesh.com/mac/bash-complete-nosort-invalid-option-name/", "title": "-bash: complete: nosort: invalid option name", "content_html": "Some months ago I started getting the following error whenever I’d launch bash on my Mac.
-bash: complete: nosort: invalid option name
A quick Google gave me the impression it was to do with WireGuard. I do have WireGuard installed on the Mac, so I figured there’s nothing I can do and left it.
\nToday, I wasn’t in a mood to leave it so thought I’d try and fix it. Looks like it’s a side effect of wireguard-tools
actually, which I don’t have installed. Maybe the macOS WireGuard software does something? But I couldn’t find anything in the completions folder to do with WireGuard.
On an M1 (and above) the Bash completion files are at /opt/homebrew/etc/bash_completion.d
. So I went to that folder and did the following:
grep -OR nosort *
The -R
switch means recurse; the -O
switch, which I missed initially and didn’t realize was needed, tells grep
to follow symbolic links. That’s important coz this folder has symbolic links and without this switch the grep
command won’t do anything. What I am doing here is searching for the word “nosort” in all the files of that folder. Apparently the complete
command has this invalid switch passed to it in some file.
Sure enough, I got one result:
gpg-tui.bash:complete -F _gpg-tui -o nosort -o bashdefault -o default gpg-tui
I don’t use gpg-tui
much; I’d installed it when I came across it somewhere. So I uninstalled it and the problem was gone. If I were using gpg-tui
I’d have just edited the file and removed this option I guess.
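For the record, the edit itself is a one-liner. Here it is demonstrated on a copy of the offending completion line found above:

```shell
# drop the unsupported '-o nosort' option from the completion definition
echo 'complete -F _gpg-tui -o nosort -o bashdefault -o default gpg-tui' |
  sed 's/-o nosort //'
# → complete -F _gpg-tui -o bashdefault -o default gpg-tui
```

The same sed expression, run in-place against the file in bash_completion.d, would have fixed it for good.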
When using an app like PDF Expert or Adobe Acrobat on a mobile phone, if the user wants to open documents from OneDrive for business they have to set up a connection with the tenant first. Else they get warnings like these:
The actual workflow of where they get the prompt varies. In the case of Adobe, it launches the Microsoft Authenticator app to authenticate the user, and then throws the above warning. PDF Expert, on the other hand, asks the user to sign-in in an integrated browser and after doing so errors out.
\nWhat\u2019s happening here is that the app needs access to our tenant to read the user\u2019s files etc. In a \u201cdesktop\u201d scenario this is a case of getting the authorize endpoint and signing in with an admin account that has permissions; but in the case of mobile devices that is not possible. I need a workaround – some way of extracting the authorize endpoint URL the app is sending my tenant, and launching that on a desktop as an admin.
\nIn my case this was an iOS device. Here’s what I did to sort out PDF Expert.
\nI could have installed a proxy like Fiddler to capture the phone traffic, but that’s usually very involved. Instead, I Googled around for any iOS proxy apps and came across a 3rd party app called Proxyman.
\nUsing it is very easy. I installed the app, followed its instructions to download a config profile (this is to enable VPN so it can capture the traffic) and set its certificate as trusted (so it can capture SSL traffic). Also, in the SSL Proxying List section I added \u201clogin.microsoftonline.com\u201d.
\nAfter that, I toggled the \u201cEnable VPN\u201d switch; launched PDF Expert and tried to add a OneDrive connection as before. I didn’t actually sign in; merely enabling the connection, which in turn opens the browser window asking you to log in, is enough.
\nIn the Proxyman app if I now search for login.microsoftonline.com I will see an entry like this:
\n\nNote the \u201c/authorize\u201d endpoint.
\nHere’s the URL for anyone else stumbling upon this issue with PDF Expert: https://login.microsoftonline.com/common/oauth2/v2.0/authorize?nonce=f7buTVb2jsYqIdglEfwxW4OYWSZ8pu3C6e6ArYXij1A&response_type=code&code_challenge_method=S256&scope=https://graph.microsoft.com/User.Read%20https://graph.microsoft.com/Files.ReadWrite.All%20offline_access&code_challenge=pUnIFpJSSbjD0NIAH3jDOBfCShezfC52bzwD51WhOy0&redirect_uri=msauth.com.readdle.pdfexpert5://auth/&client_id=8e27befb-4e35-4688-a548-769600f7b04e&state=qT-xZF7DIkEesGF_dy9TsZr8YSTqGtDcVJHKrF0IumA
\nI copy pasted this URL and visited it in on my desktop with an admin account after enabling the \u201cApplication Administrator\u201d (or more powerful) role.
\nThis brings up a window asking for permissions:
\n\nI accepted that.
\nNext, I logged in to Entra ID portal and found “PDF Expert” under Enterprise Applications. I went to the Permissions section to see what permissions were granted.
\n\nLooks good: Delegated permissions that let the signed-in user read their files and such.
\nAt this point if an end user tries to use PDF Expert it still won’t work as I haven’t consented on behalf of the firm. So I clicked the button that did consent for everyone. This added more permissions to the list, with the result that I now have a consent for the following permissions for everyone:
\n\nI removed the ones I felt were unnecessary – the last two especially, and also the Sites.Manage.All
Graph permission. If need be I can grant those later. The end result was:
After that I tested as a user and I was successfully able to connect PDF Expert with OneDrive.
\nUpdate (12th Oct 2023): I did the same for Adobe Acrobat today and here’s the URL for that:
\nhttps://login.microsoftonline.com/common/oauth2/v2.0/authorize?x-app-name=Acrobat&x-client-brkrver=3.3.0&login_hint=<email>%40<address.tld>&x-client-Ver=1.2.15&brkr=1&client-request-id=8B8E91F6-AF4D-4F37-B7BB-5B0C1B3D78BD&x-client-src-SKU=MSAL.iOS&response_type=code&redirect_uri=msauth.com.adobe.Adobe-Reader%3A%2F%2Fauth&x-client-CPU=64&x-app-ver=23.08.01&haschrome=1&state=QkRDNkQ5NDQtRUIyRC00MTMzLUE4QjAtRDNCRDI2MkYyQkEz&return-client-request-id=true&X-AnchorMailbox=Oid%3A78cdec0a-739e-4612-8ac6-d2e78580042d%40<tenantId>&scope=Files.ReadWrite.All%20User.Read%20openid%20profile%20offline_access&domain_req=<tenantId>&claims=%7B%22access_token%22%3A%7B%22xms_cc%22%3A%7B%22values%22%3A%5B%22protapp%22%5D%7D%7D%7D&x-client-SKU=MSAL.iOS&client_id=cf90ab8f-8091-4c2d-b6a9-0b89a3312382&x-client-OS=17.0.1&client_info=1&domain_hint=organizations&x-client-DM=iPhone&login_req=78cdec0a-739e-4612-8ac6-d2e78580042d
Got to replace some bits like the tenantId and email address (the %40
character is @
).
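A quick way to eyeball what such an authorize URL is asking for is to split its query string, one parameter per line. A sketch (URL trimmed to two parameters for illustration):

```shell
# print each query parameter on its own line so scope and client_id stand out
url='https://login.microsoftonline.com/common/oauth2/v2.0/authorize?scope=Files.ReadWrite.All&client_id=cf90ab8f-8091-4c2d-b6a9-0b89a3312382'
printf '%s\n' "${url#*\?}" | tr '&' '\n'
# → scope=Files.ReadWrite.All
# → client_id=cf90ab8f-8091-4c2d-b6a9-0b89a3312382
```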
Last week I had blogged about ExchangeOnlineManagement
and Az
module troubles with PowerShell 7.2. This week I ran into another issue as I moved more Runbooks over to PowerShell 7.2.
Some of them started failing for no reason. It happened when I’d do a Connect-PnPOnline
to connect to a SharePoint site, and the error was: Host not reachable.
Such a weird one, coz I could connect to the site just fine from the Hybrid Runbook Worker this Runbook runs on. Moreover, most of my Runbooks work fine – even though they all connect to the same site and run from the same HRW – just a few failed. This stumped me for a bit.
\nThen I realized the ones that fail were using this throttling function I had created. It basically checks if there’s another instance of the Runbook already running, and if so quits or waits. Hmm, why was that causing things to fail?
\nYes the throttling function connects to Azure and does some stuff, but I was connecting to Azure in all the other runbooks anyway (to read Key Vaults and such) and that had no issue. Digging more, I realized the issue was with the Az.Resources
module. The cmdlets used by that function make use of this module, and it looks like that conflicts with PnP.PowerShell. Eugh.
Looks like this is fixed in the upcoming 2.3.0 release of PnP.PowerShell (still at 2.2.0 as of writing) – that doesn’t help me currently. I can’t update my production Runbooks to use nightly versions of the module just to fix this issue. I could, of course, remove the throttling function – which is what I did in the interim – but I wasn’t happy with that. I can’t have these Runbooks running concurrently.
\nLast we met my throttling function it looked like this:
# This is a Function I created (from various Google results) to throttle a Runbook.\r\n# It will either wait or quit the runbook. \r\nfunction Throttle-AzRunbook {\r\n param(\r\n [switch]$quitRatherThanWait,\r\n [int]$numberOfInstances = 1\r\n )\r\n\r\n # Connect to Azure. With a Managed Identity in this case as that's what I use. \r\n # It's like I am already connected but I can't assume that within this function. \r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n # Get the Job ID from PSPrivateMetadata. 
That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = $null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJob = Get-AzAutomationJob -AutomationAccountName $automationAccount.Name `\r\n -ResourceGroupName $automationAccount.ResourceGroupName `\r\n -Id $automationJobId `\r\n -ErrorAction SilentlyContinue\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this Runbook.\r\n\r\n $allActiveJobs = Get-AzAutomationJob -AutomationAccountName $automationAccountName `\r\n -ResourceGroupName $resourceGroupName `\r\n -RunbookName $runbookName | \r\n Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\")} \r\n\r\n if ($quitRatherThanWait.IsPresent -and $allActiveJobs.Count -gt $numberOfInstances) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n\r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n\r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. 
Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob -AutomationAccountName $automationAccountName `\r\n -ResourceGroupName $resourceGroupName `\r\n -RunbookName $runbookName | \r\n Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\")} \r\n \r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n}
Turns out this doesn’t work with PowerShell 7.2 and HRWs as the PSPrivateMetadata
variable is not present in 7.2 + HRWs. (It is present in 5.x + HRWs and even 7.2 running on Azure – so it’s one of those things that will appear in the future I guess).
This means I can’t extract the JobId and use it to search other jobs. What can I do here? After some tinkering I realized I can cheat and extract the JobId from one of the trace log files. You see, every HRW Runbook writes to this path:
\n\nThe highlighted bit varies per runbook.
\nThe file there looks like this:
Orchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:51.8046468Z] Starting sandbox process. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.0546465Z] Hybrid Sandbox\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.6485351Z] First Trace Log.\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.8984865Z] Sandbox Recieving Job. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246][jobId=b20c76be-b132-4679-84e2-17b244734f65]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:48:23.0674290Z] Sandbox close request. The sandbox will exit immediately. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:48:23.0674290Z] Leaving sandbox process. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]
Neat, so line 4 has the JobId.
\nWhat can I do to find this path to this file? Turns out $PSScriptRoot
has it. Split its path to get the parent, tack on \"\\diags\\trace.log\"
and that’s my file. I can essentially do something like this to get the Id if it’s not found:
if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n\r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n\r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n}
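The same extraction can be tested in plain shell against a captured log line (the jobId below is from the sample trace.log above):

```shell
# pull the jobId out of an HRW trace.log 'Sandbox Recieving Job' line
line='Sandbox Recieving Job. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246][jobId=b20c76be-b132-4679-84e2-17b244734f65]'
echo "$line" | grep -o 'jobId=[0-9a-f-]*' | cut -d= -f2
# → b20c76be-b132-4679-84e2-17b244734f65
```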
With this in hand my throttling function now looks like this:
function Throttle-AzRunbook {\r\n param(\r\n [switch]$quitRatherThanWait,\r\n [int]$numberOfInstances = 1\r\n )\r\n\r\n # Connect to Azure. With a Managed Identity in this case as that's what I use. \r\n # It's like I am already connectes but I can't assume that within this function. \r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n # Get the Job ID from PSPrivateMetadata. That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # A workaround for PowerShell 7.x and HRW where $PSPrivateMetadata is missing\r\n # I extract it from the trace.log file instead\r\n if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n \r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n\r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n }\r\n\r\n if ($automationJobId) {\r\n Write-Output \"JobID is $automationJobId\"\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = 
$null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccount.Name\r\n \"ResourceGroupName\" = $automationAccount.ResourceGroupName\r\n \"Id\" = $automationJobId\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $runbookJob = Get-AzAutomationJob @runbookJobParams\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this Runbook.\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccountName\r\n \"ResourceGroupName\" = $resourceGroupName\r\n \"RunbookName\" = $runbookName\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n\r\n if ($allActiveJobs.Count -gt $numberOfInstances) {\r\n if ($quitRatherThanWait.IsPresent) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n \r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n \r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. 
Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n\r\n } else {\r\n Write-Output \"No other concurrent jobs found...\"\r\n }\r\n \r\n } else {\r\n Write-Warning \"Unable to find JobID. Poceeding with job, this might result in concurrent executions\"\r\n if ($PSVersionTable.PSVersion.Major -eq 7 -and $PWD -match \"HybridWorker\") {\r\n Write-Output \"This is PowerShell 7.x in HRW - that explains it!\"\r\n }\r\n }\r\n}\n
Ok, so what can I do to fix PnP PowerShell? Can’t I just unload the Az.Resources
module after it’s done? Yes, I can (Remove-Module
) but that doesn’t unload any of the loaded assemblies, and since those are usually the source of conflict Remove-Module
can’t help us.
What can I do regarding assemblies? In my previous post I had alluded to this very informative article from Microsoft. It suggests three ways to work around this issue:
\nWith the job system you start the function as a separate job basically. And since it runs independent of the main script, the modules & assemblies it loads too are independent. When the job exits these are removed. Awesome!
\nTypically the solution is simple:
$result = Start-Job { Invoke-ConflictingCommand } | Receive-Job -Wait
In my case this is a function. How the heck do I get that in there? I could of course define the function within the Start-Job
, but I don’t want that. I want to keep my code consistent across Runbooks. Thanks to a helpful StackOverflow post I learnt I can do the following:
function FOO { \"HEY\" }\r\n\r\nStart-Job -ScriptBlock { \r\n\r\n # Redefine function FOO in the context of this job.\r\n $function:FOO = \"$using:function:FOO\" \r\n \r\n # Now FOO can be invoked.\r\n FOO\r\n\r\n} | Receive-Job -Wait -AutoRemoveJob
So all I have to do is:
Start-Job {\r\n ${function:Throttle-AzRunbook} = \"${using:function:Throttle-AzRunbook}\"\r\n\r\n Throttle-AzRunbook -ScriptRoot $ScriptRoot\r\n\r\n} | Receive-Job -Wait -AutoRemoveJob
I need to use the curly braces because of the dash in the name, else it complains.
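The isolation at play here is ordinary child-process behaviour: whatever the job loads or changes dies with it. A shell illustration of the same idea:

```shell
# a subshell gets its own copy of the environment; nothing leaks back to the
# parent - which is why running the conflicting command in a child fixes the clash
VALUE=parent
( VALUE=child; echo "inside: $VALUE" )   # → inside: child
echo "after: $VALUE"                     # → after: parent
```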
\nI didn’t know of this function name space. That’s useful.
\nTwo issues with this.
\nOne: my function complains that it can’t find PSScriptRoot
any more. Apparently that’s how it is. So I modified the function to take this as an input parameter:
function Throttle-AzRunbook {\r\n param(\r\n [Parameter(Mandatory=$false)]\r\n [switch]$quitRatherThanWait,\r\n\r\n [Parameter(Mandatory=$false)]\r\n [int]$numberOfInstances = 1,\r\n\r\n # If invoked from Start-Job pass the $PSScriptRoot as $ScriptRoot\r\n [Parameter(Mandatory=$false)]\r\n [string]$ScriptRoot\r\n )\r\n\r\n # Connect to Azure. With a Managed Identity in this case as that's what I use. \r\n # I could be already connected but I can't assume that within this function. \r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n # Note that this loads the Az.Resources module.\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n Write-Output \"Checking whether to throttle...\"\r\n # Get the Job ID from PSPrivateMetadata. 
That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # A workaround for PowerShell 7.x and HRW where $PSPrivateMetadata is missing\r\n # I extract it from the trace.log file instead\r\n if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n if ($ScriptRoot) {\r\n $parentPath = Split-Path -Parent $ScriptRoot\r\n\r\n } elseif ($PSScriptRoot) {\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n\r\n } else {\r\n $parentPath = $null\r\n }\r\n\r\n if ($parentPath) {\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n \r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n \r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n }\r\n }\r\n\r\n if ($automationJobId) {\r\n Write-Output \"JobID is $automationJobId\"\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = $null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccount.Name\r\n \"ResourceGroupName\" = $automationAccount.ResourceGroupName\r\n \"Id\" = $automationJobId\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $runbookJob = Get-AzAutomationJob @runbookJobParams\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this 
Runbook.\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccountName\r\n \"ResourceGroupName\" = $resourceGroupName\r\n \"RunbookName\" = $runbookName\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n\r\n if ($allActiveJobs.Count -gt $numberOfInstances) {\r\n if ($quitRatherThanWait.IsPresent) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n \r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n \r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n\r\n } else {\r\n Write-Output \"No other concurrent jobs found...\"\r\n }\r\n \r\n } else {\r\n Write-Warning \"Unable to find JobID. 
Proceeding with job, this might result in concurrent executions\"\r\n if ($PSVersionTable.PSVersion.Major -eq 7 -and $PWD -match \"HybridWorker\") {\r\n Write-Output \"This is PowerShell 7.x in HRW - that explains it!\"\r\n }\r\n }\r\n}
And I will pass that as an input.
\nThe second issue was that none of the Write-Output
output from the function was appearing. I got it working by changing things a bit so here’s what my Start-Job
looks like now (this includes the change to pass PSScriptRoot
to the function; I make use of $using
for that):
$job = Start-Job {\r\n ${function:Throttle-AzRunbook} = \"${using:function:Throttle-AzRunbook}\"\r\n $ScriptRoot = $using:PSScriptRoot\r\n Throttle-AzRunbook -ScriptRoot $ScriptRoot\r\n}\r\n\r\nReceive-Job -Wait $job
For some reason having Receive-Job
separately got it to show the output.
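A minimal local repro of that behaviour, with a made-up job body:

```powershell
# Output written inside a job is buffered on the job object; nothing shows
# until Receive-Job drains it. -Wait blocks until the job finishes.
$job = Start-Job { Write-Output "hello from the job" }
Receive-Job -Wait $job    # prints: hello from the job
Remove-Job $job
```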
And that’s it! Now I have throttling working with PowerShell 7.2 and HRWs. I also hopefully know how to tackle any further conflicts between these various modules.
\n", "content_text": "Last week I had blogged about ExchangeOnlineManagement and Az module troubles with PowerShell 7.2. This week I ran into another issue as I moved more Runbooks over to PowerShell 7.2.\nSome of them started failing for no reason. It happened when I’d do a Connect-PnPOnline to connect to a SharePoint site, and the error was: Host not reachable.\nSuch a weird one, coz if I try and connect to the site from the Hybrid Runbook Worker this Runbook runs on, I can connect to the site. Moreover, most of my Runbooks work fine – even though they all connect to the same site and run from the same HRW – just a few failed. This stumped me for a bit.\nThen I realized the ones that fail were using this throttling function I had created. It basically checks if there’s another instance of the Runbook already running, and if so quits or waits. Hmm, why was that causing things to fail?\nYes the throttling function connects to Azure and does some stuff, but I was connecting to Azure in all the other runbooks anyway (to read Key Vaults and such) and that had no issue. Digging more, I realized the issue was with the Az.Resources module. The cmdlets used by that function make use of this module, and looks like that conflicts with PnP.PowerShell. Eugh.\nLooks like this is fixed in the upcoming 2.3.0 release of PnP.PowerShell (still at 2.2.0 as of writing) – that doesn’t help me currently. I can’t update my production Runbooks to using nightly versions of the module just to fix this issue. I could, of course, remove the throttling function – which is what I did in the interim – but I wasn’t happy with that. I can’t have these Runbooks running concurrently.\nAn update on the throttling function\nLast we met my throttling function it looked like this:# This is a Function I created (from various Google results) to throttle a Runbook.\r\n# It will either wait or quit the runbook. 
\r\nfunction Throttle-AzRunbook {\r\n param(\r\n [switch]$quitRatherThanWait,\r\n [int]$numberOfInstances = 1\r\n )\r\n\r\n # Connect to Azure. With a Managed Identity in this case as that's what I use. \r\n # It's likely I am already connected but I can't assume that within this function. \r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n # Get the Job ID from PSPrivateMetadata. That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = $null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJob = Get-AzAutomationJob -AutomationAccountName $automationAccount.Name `\r\n -ResourceGroupName $automationAccount.ResourceGroupName `\r\n -Id $automationJobId `\r\n -ErrorAction SilentlyContinue\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this Runbook.\r\n\r\n $allActiveJobs = Get-AzAutomationJob -AutomationAccountName 
$automationAccountName `\r\n -ResourceGroupName $resourceGroupName `\r\n -RunbookName $runbookName | \r\n Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\")} \r\n\r\n if ($quitRatherThanWait.IsPresent -and $allActiveJobs.Count -gt $numberOfInstances) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n\r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n\r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob -AutomationAccountName $automationAccountName `\r\n -ResourceGroupName $resourceGroupName `\r\n -RunbookName $runbookName | \r\n Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\")} \r\n \r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n}Turns out this doesn’t work with PowerShell 7.2 and HRWs as the PSPrivateMetadata variable is not present in 7.2 + HRWs. (It is present in 5.x + HRWs and even 7.2 running on Azure – so it’s one of those things that will appear in the future I guess).\nThis means I can’t extract the JobId and use it to search other jobs. What can I do here? After some tinkering I realized I can cheat and extract the JobId from one of the trace log files. 
You see, every HRW Runbook writes to this path:\n\nThe highlighted bit varies per runbook.\nThe file there looks like this:Orchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:51.8046468Z] Starting sandbox process. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.0546465Z] Hybrid Sandbox\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.6485351Z] First Trace Log.\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:46:52.8984865Z] Sandbox Recieving Job. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246][jobId=b20c76be-b132-4679-84e2-17b244734f65]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:48:23.0674290Z] Sandbox close request. The sandbox will exit immediately. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]\r\nOrchestrator.Sandbox.Diagnostics Critical: 0 : [2023-09-26T09:48:23.0674290Z] Leaving sandbox process. [sandboxId=1a16c23f-90f5-46de-a7a8-213eed634246]Neat, so line 4 has the JobId.\nWhat can I do to find this path to this file? Turns out $PSScriptRoot has it. Split its path to get the parent, tack on \"\\diags\\trace.log\" and that’s my file. I can essentially do something like this to get the Id if it’s not found:if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n\r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n\r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n}With this in hand my throttling function now looks like this:function Throttle-AzRunbook {\r\n param(\r\n [switch]$quitRatherThanWait,\r\n [int]$numberOfInstances = 1\r\n )\r\n\r\n # Connect to Azure. 
With a Managed Identity in this case as that's what I use. \r\n # It's likely I am already connected but I can't assume that within this function. \r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n # Get the Job ID from PSPrivateMetadata. That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # A workaround for PowerShell 7.x and HRW where $PSPrivateMetadata is missing\r\n # I extract it from the trace.log file instead\r\n if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n \r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n\r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n }\r\n\r\n if ($automationJobId) {\r\n Write-Output \"JobID is $automationJobId\"\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = $null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = 
$automationAccount.Name\r\n \"ResourceGroupName\" = $automationAccount.ResourceGroupName\r\n \"Id\" = $automationJobId\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $runbookJob = Get-AzAutomationJob @runbookJobParams\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this Runbook.\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccountName\r\n \"ResourceGroupName\" = $resourceGroupName\r\n \"RunbookName\" = $runbookName\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n\r\n if ($allActiveJobs.Count -gt $numberOfInstances) {\r\n if ($quitRatherThanWait.IsPresent) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n \r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n \r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. 
Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n\r\n } else {\r\n Write-Output \"No other concurrent jobs found...\"\r\n }\r\n \r\n } else {\r\n Write-Warning \"Unable to find JobID. Proceeding with job, this might result in concurrent executions\"\r\n if ($PSVersionTable.PSVersion.Major -eq 7 -and $PWD -match \"HybridWorker\") {\r\n Write-Output \"This is PowerShell 7.x in HRW - that explains it!\"\r\n }\r\n }\r\n}\nGetting PnP PowerShell working with this\nOk, so what can I do to fix PnP PowerShell? Can’t I just unload the Az.Resources module after it's done? Yes, I can (Remove-Module) but that doesn’t unload any of the loaded assemblies, and since those are usually the source of conflict Remove-Module can’t help us.\nWhat can I do regarding assemblies? In my previous post I had alluded to this very informative article from Microsoft. It suggests three ways to work around this issue:\n\nStart PowerShell as a sub-process – I didn’t try that, wasn’t sure if it would work\nUse the job system – this is what I tried\nUse PowerShell remoting – won’t work with Runbooks\n\nWith the job system you start the function as a separate job basically. And since it runs independent of the main script, the modules & assemblies it loads too are independent. When the job exits these are removed. Awesome!\nTypically the solution is simple:$result = Start-Job { Invoke-ConflictingCommand } | Receive-Job -WaitIn my case this is a function. How the heck do I get that in there? 
I could of course define the function within the Start-Job, but I don’t want that. I want to keep my code consistent across Runbooks. Thanks to a helpful StackOverflow post I learnt I can do the following:function FOO { \"HEY\" }\r\n\r\nStart-Job -ScriptBlock { \r\n\r\n # Redefine function FOO in the context of this job.\r\n $function:FOO = \"$using:function:FOO\" \r\n \r\n # Now FOO can be invoked.\r\n FOO\r\n\r\n} | Receive-Job -Wait -AutoRemoveJobSo all I have to do is:Start-Job {\r\n ${function:Throttle-AzRunbook} = \"${using:function:Throttle-AzRunbook}\"\r\n\r\n Throttle-AzRunbook -ScriptRoot $ScriptRoot\r\n\r\n} | Receive-Job -Wait -AutoRemoveJobI need to use the curly braces because of the dash in the name, else it complains.\nI didn’t know of this function name space. That’s useful.\nTwo issues with this.\nOne: my function complains that it can’t find PSScriptRoot any more. Apparently that’s how it is. So I modified the function to take this as an input parameter:function Throttle-AzRunbook {\r\n param(\r\n [Parameter(Mandatory=$false)]\r\n [switch]$quitRatherThanWait,\r\n\r\n [Parameter(Mandatory=$false)]\r\n [int]$numberOfInstances = 1,\r\n\r\n # If invoked from Start-Job pass the $PSScriptRoot as $ScriptRoot\r\n [Parameter(Mandatory=$false)]\r\n [string]$ScriptRoot\r\n )\r\n\r\n # Connect to Azure. With a Managed Identity in this case as that's what I use. \r\n # I could be already connected but I can't assume that within this function. 
\r\n # Must connect to Azure before running Get-AzAutomationJob or Get-AzResource\r\n # Note that this loads the Az.Resources module.\r\n try {\r\n # From https://docs.microsoft.com/en-us/azure/automation/enable-managed-identity-for-automation#authenticate-access-with-system-assigned-managed-identity\r\n # Ensures you do not inherit an AzContext in your runbook\r\n Disable-AzContextAutosave -Scope Process | Out-Null\r\n\r\n # Connect to Azure with system-assigned managed identity\r\n Connect-AzAccount -Identity | Out-Null\r\n\r\n } catch {\r\n Write-Error \"Runbook could not connect to Azure: $($_.Exception.Message)\"\r\n exit\r\n }\r\n\r\n Write-Output \"Checking whether to throttle...\"\r\n # Get the Job ID from PSPrivateMetadata. That's the only thing it contains!\r\n $automationJobId = $PSPrivateMetadata.JobId.Guid\r\n\r\n # A workaround for PowerShell 7.x and HRW where $PSPrivateMetadata is missing\r\n # I extract it from the trace.log file instead\r\n if (!$automationJobId) {\r\n Write-Output \"Unable to find JobID from PSPrivateMetadata\"\r\n if ($PWD -match \"HybridWorker\") {\r\n Write-Output \"Trying a workaround to find JobID as this is an HRW\"\r\n\r\n if ($ScriptRoot) {\r\n $parentPath = Split-Path -Parent $ScriptRoot\r\n\r\n } elseif ($PSScriptRoot) {\r\n $parentPath = Split-Path -Parent $PSScriptRoot\r\n\r\n } else {\r\n $parentPath = $null\r\n }\r\n\r\n if ($parentPath) {\r\n $fullPath = $parentPath + \"\\diags\\trace.log\"\r\n \r\n try {\r\n $automationJobId = ((Get-Content $fullPath -ErrorAction Stop | Select-String \"jobId\") -split 'jobId=')[1] -replace ']',''\r\n \r\n } catch {\r\n $automationJobId = $null\r\n }\r\n }\r\n }\r\n }\r\n\r\n if ($automationJobId) {\r\n Write-Output \"JobID is $automationJobId\"\r\n # Get all Runbooks in the current subscription\r\n $allAutomationAccounts = Get-AzResource -ResourceType Microsoft.Automation/automationAccounts\r\n\r\n $automationAccountName = $null\r\n $resourceGroupName = $null\r\n $runbookName = 
$null\r\n\r\n foreach ($automationAccount in $allAutomationAccounts) {\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccount.Name\r\n \"ResourceGroupName\" = $automationAccount.ResourceGroupName\r\n \"Id\" = $automationJobId\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $runbookJob = Get-AzAutomationJob @runbookJobParams\r\n\r\n if (!([string]::IsNullOrEmpty($runbookJob))) {\r\n $automationAccountName = $runbookJob.AutomationAccountName\r\n $resourceGroupName = $runbookJob.ResourceGroupName\r\n $runbookName = $runbookJob.RunbookName\r\n }\r\n }\r\n\r\n # At this point I'll have the Automation Account Name, Runbook Name, Job ID and Resource Group Name, \r\n # Find all other active jobs of this Runbook.\r\n $runbookJobParams = @{\r\n \"AutomationAccountName\" = $automationAccountName\r\n \"ResourceGroupName\" = $resourceGroupName\r\n \"RunbookName\" = $runbookName\r\n \"ErrorAction\" = \"SilentlyContinue\"\r\n }\r\n\r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n\r\n if ($allActiveJobs.Count -gt $numberOfInstances) {\r\n if ($quitRatherThanWait.IsPresent) {\r\n Write-Output \"Exiting as another job is already running\"\r\n exit\r\n \r\n } else {\r\n $oldestJob = $AllActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n \r\n # If this job is not the oldest created job we will wait until the existing jobs complete or the number of jobs is less than numberOfInstances\r\n while (($AutomationJobID -ne $oldestJob.JobId) -and ($allActiveJobs.Count -ge $numberOfInstances)) {\r\n Write-Output \"Waiting as there are currently running $($allActiveJobs.Count) active jobs for this runbook already. 
Sleeping 30 seconds...\"\r\n Write-Output \"Oldest Job is $($oldestJob.JobId)\"\r\n \r\n Start-Sleep -Seconds 30\r\n \r\n $allActiveJobs = Get-AzAutomationJob @runbookJobParams | Where-Object { ($_.Status -eq \"Running\") -or ($_.Status -eq \"Starting\") -or ($_.Status -eq \"Queued\") -or ($_.Status -eq \"Activating\") -or ($_.Status -eq \"Resuming\") }\r\n $oldestJob = $allActiveJobs | Sort-Object -Property CreationTime | Select-Object -First 1\r\n } \r\n \r\n Write-Output \"Job can continue...\"\r\n }\r\n\r\n } else {\r\n Write-Output \"No other concurrent jobs found...\"\r\n }\r\n \r\n } else {\r\n Write-Warning \"Unable to find JobID. Proceeding with job, this might result in concurrent executions\"\r\n if ($PSVersionTable.PSVersion.Major -eq 7 -and $PWD -match \"HybridWorker\") {\r\n Write-Output \"This is PowerShell 7.x in HRW - that explains it!\"\r\n }\r\n }\r\n}And I will pass that as an input.\nThe second issue was that none of the Write-Output output from the function was appearing. I got it working by changing things a bit so here’s what my Start-Job looks like now (this includes the change to pass PSScriptRoot to the function; I make use of $using for that):$job = Start-Job {\r\n ${function:Throttle-AzRunbook} = \"${using:function:Throttle-AzRunbook}\"\r\n $ScriptRoot = $using:PSScriptRoot\r\n Throttle-AzRunbook -ScriptRoot $ScriptRoot\r\n}\r\n\r\nReceive-Job -Wait $jobFor some reason having Receive-Job separately got it to show the output.\nAnd that’s it! Now I have throttling working with PowerShell 7.2 and HRWs. 
I also hopefully know how to tackle any further conflicts between these various modules.", "date_published": "2023-09-29T11:46:12+01:00", "date_modified": "2023-09-29T11:48:54+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "exchangeonline", "modules", "pnp.powershell", "powershell", "runbook", "runbooks", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7384", "url": "https://rakhesh.com/azure/could-not-load-file-or-assembly/", "title": "Could not load file or assembly", "content_html": "Continuing with my efforts to move all my Azure Automation Runbooks to PowerShell 7.2, yesterday I decided to tackle a couple of Runbooks that use the ExchangeOnlineManagement
module.
I installed PowerShell 7.2 on the Hybrid Runbook Worker (HRW) and installed the latest version of the ExchangeOnlineManagement
and Az
modules on it (this is side by side with the existing version in PowerShell 5.x as I detailed previously; long story short, install it via Install-Module
but with the -Force
switch).
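For reference, that side-by-side install is just the usual Install-Module, run from an elevated PowerShell 7 (pwsh) session on the HRW:

```powershell
# -Force lets the new versions install alongside whatever Windows PowerShell
# 5.x is already using; each PowerShell edition has its own module path.
Install-Module -Name ExchangeOnlineManagement -Force -Scope AllUsers
Install-Module -Name Az -Force -Scope AllUsers
```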
Running Connect-ExchangeOnline
by itself worked fine in the Runbook, but if I use any Az
cmdlet first (because I need to access the Key Vault etc.) then Connect-ExchangeOnline
complains:
Could not load file or assembly 'Microsoft.Identity.Client, Version=4.41.0.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8ae
Similar results if I login to the HRW and Import-Module Az
followed by Connect-ExchangeOnline
:
OperationStopped: Could not load file or assembly 'Microsoft.IdentityModel.Tokens, Version=6.22.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Could not find or load a specific\r\nfile. (0x80131621)
Slightly different assembly this time, but same stuff I guess.
\nInitially this forum post seemed like it should do the trick. Apparently, in an admin PowerShell 7 window (something I missed initially), I should do the following:
# Replace the version with whatever the error message shows\r\nInstall-Module -Name Microsoft.Identity.Client -RequiredVersion 4.41.0.0
That didn’t work for me though, I kept getting the same error.
\nI learnt of this one-liner to identify the loaded assemblies though:
[System.AppDomain]::CurrentDomain.GetAssemblies() | Where-Object Location | Sort-Object -Property FullName | Select-Object -Property FullName, Location, GlobalAssemblyCache, IsFullyTrusted | Out-GridView
This showed me that with the Microsoft.Identity.Client
assembly at least the file isn’t loaded. But this is probably a red herring, coz maybe ExchangeOnlineManagement
is failing before it even reaches this stage when I am trying this manually.
The other assembly is present though.
\n\nAnd as you can see it’s present both from the Az
module, as well as the ExchangeOnlineManagement
module – which is the cause of this conflict.
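One way to see this clash on disk is to list every copy of the DLL under the PowerShell 7 module folder; a rough sketch (the path is the default install location, adjust to taste):

```powershell
# Both module trees ship their own copy of the identity DLLs; whichever
# assembly loads first wins for the whole process, tripping up the other module.
Get-ChildItem "$env:ProgramFiles\PowerShell\Modules" -Recurse -Filter 'Microsoft.IdentityModel.Tokens.dll' |
    ForEach-Object {
        [PSCustomObject]@{
            FileVersion = $_.VersionInfo.FileVersion
            Path        = $_.FullName
        }
    }
```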
Turns out you can load assemblies manually, like this for instance:
Add-Type -Path 'C:\\Program Files\\PowerShell\\Modules\\ExchangeOnlineManagement\\3.3.0\\netCore\\System.IdentityModel.Tokens.Jwt.dll'
Or even:
Add-Type -AssemblyName 'Microsoft.IdentityModel.Tokens, Version=6.22.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'
I tried that, but it didn’t help. ExchangeOnlineManagement
continued complaining.
Hmm.
\nWhat exactly is an assembly? From this post:
\n\nAn assembly is a packaged chunk of functionality (the .NET equivalent of a DLL). Almost universally an assembly will consist of exactly one file (either a DLL or an EXE). To make things confusing, the naming convention for assemblies is very similar to the naming convention for namespaces. Be warned: they are not the same thing! An assembly may contain classes from many namespaces, and a namespace may cover many assemblies. Although not strictly correct, you can think of the assembly as the physical file containing the executable code and a namespace as a category to organize all the code that relates to a particular area.
\nNamespaces are just labels that are used to allow classes from different assemblies to have the same name and not interfere with each other. Much like classes form a container, or boundary, for their members (so many different classes can have an “Open()” method, for example), namespaces form a container for classes with the same name. Native .NET namespaces start with “System” (this is the root of the .NET namespace universe). The convention for application-specific namespaces is to start them with {CompanyName}.{ProductName}. So, for example, you may see Microsoft.Office.{some more stuff} for Office-related classes.
Ok, so we are back to the DLL hell days.
\nI then found this good article about assemblies and conflicts.
\n\nIn .NET, dependency conflicts occur when two versions of the same assembly are loaded into the same Assembly Load Context. This term means slightly different things on different .NET platforms, which is covered later in this article. This conflict is a common problem that occurs in any software where versioned dependencies are used.
\nConflict issues are compounded by the fact that a project almost never deliberately or directly depends on two versions of the same dependency. Instead, the project has two or more dependencies that each require a different version of the same dependency.
Interestingly, it turns out PowerShell’s own dependencies can conflict with the dependencies of its modules. I had encountered this in the past, for the very example given in that post, without realizing it. That article has some good suggestions too on what to do, but none of them apply to my situation of using HRWs… I think.
\nWhile all this was good info, eventually I was still stuck without a real solution.
\nI tried to remove all the Az
modules after they had done their work, and then run Connect-ExchangeOnline:
Get-Module | Where-Object { $_.Name -match \"^Az\" } | Remove-Module\r\nConnect-ExchangeOnline @connectParams
Failed.
\nSame if I remove the Az
modules and also ExchangeOnlineManagement
.
What if I load the ExchangeOnlineManagement
module first?
Import-Module ExchangeOnlineManagement\r\nImport-Module Az\r\n\r\n# Do the Azure stuff...\r\n\r\n# Then connect to ExO\r\nConnect-ExchangeOnline @connectParams
I have to load the Az
module before connecting, coz that’s how I get my certs from the Key Vault.
Nope, doesn’t work!
\nAny difference if I load the specific ones I need?
Import-Module ExchangeOnlineManagement\r\nImport-Module Az.Accounts\r\nImport-Module Az.KeyVault\r\n\r\n# Do the Azure stuff...\r\n\r\n# Then connect to ExO\r\nConnect-ExchangeOnline @connectParams
No way, that worked!!
\nBut that was when I tested on the HRW directly. What if I do this in the Runbook? Does it work?
\nIt actually did! Whee!
\nThe weird thing is that if I look at the loaded assemblies, the version ExchangeOnlineManagement
wants isn’t even loaded:
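For reference, the check is just the assembly listing from earlier in the post, filtered down to the identity assemblies; a minimal version (the 'Identity' pattern is my guess at a useful filter):

```powershell
# List the identity-related assemblies currently loaded, with versions,
# to see which copy actually ended up in the process
[System.AppDomain]::CurrentDomain.GetAssemblies() |
    Where-Object { $_.FullName -match 'Identity' } |
    Sort-Object -Property FullName |
    Select-Object -Property FullName, Location
```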
Update: This also solves issues with PnP.PowerShell
. Even though I am not specifically loading it above, in my code I am doing Connect-PnPOnline
after Connect-ExchangeOnline
and it works fine.
Update (3rd Nov 2023): While reading Tony Redmond’s blog I see that he too encountered this last month. Like he said “…it\u2019s disappointing that two Microsoft engineering groups working in the Microsoft 365 ecosystem cannot agree on which version of a critical DLL to use.” Disappointing indeed.
\n", "content_text": "Continuing with my efforts to move all my Azure Automation Runbooks to PowerShell 7.2, yesterday I decided to tackle a couple of Runbooks that use the ExchangeOnlineManagement module.\nI installed PowerShell 7.2 on the Hybrid Runbook Worker (HRW) and installed the latest version of the ExchangeOnlineManagement and Az modules on it (this is side by side with the existing version in PowerShell 5.x as I detailed previously; long story short, install it via Install-Module but with the -Force switch).\nRunning Connect-ExchangeOnline by itself worked fine in the Runbook, but if I use any Az cmdlet first (because I need to access the Key Vault etc.) then Connect-ExchangeOnline complains:Could not load file or assembly 'Microsoft.Identity.Client, Version=4.41.0.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8aeSimilar results if I login to the HRW and Import-Module Az followed by Connect-ExchangeOnline:OperationStopped: Could not load file or assembly 'Microsoft.IdentityModel.Tokens, Version=6.22.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. Could not find or load a specific\r\nfile. (0x80131621)Slightly different assembly this time, but same stuff I guess.\nInitially this forum post seemed like it should do the trick. Apparently I should in an admin PowerShell 7 window (something I missed initially) do the following:# Replace the version with whatever the error message shows\r\nInstall-Module -Name Microsoft.Identity.Client -RequiredVersion 4.41.0.0That didn’t work for me though, I kept getting the same error.\nI learnt of this cmdlet to identify the loaded assemblies though:[System.AppDomain]::CurrentDomain.GetAssemblies() | Where-Object Location | Sort-Object -Property FullName | Select-Object -Property FullName, Location, GlobalAssemblyCache, IsFullyTrusted | Out-GridViewThis showed me that with the Microsoft.Identity.Client assembly at least the file isn’t loaded. 
But this is probably a red-herring coz maybe ExchangeOnlineManagement is failing before it reaches this stage, when I am trying this manually.\n\nThe other assembly is present though.\n\nAnd as you can see it’s present both from the Az module, as well as the ExchangeOnlineManagement module – which is the cause of this conflict.\nTurns out you can load assemblies manually, like this for instance:Add-Type -Path 'C:\\Program Files\\PowerShell\\Modules\\ExchangeOnlineManagement\\3.3.0\\netCore\\System.IdentityModel.Tokens.Jwt.dll'Or even:Add-Type -AssemblyName 'Microsoft.IdentityModel.Tokens, Version=6.22.1.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'I tried that, but it didn’t help. ExchangeOnlineManagement continued complaining.\nHmm.\nWhat exactly is an assembly? From this post:\nAn assembly is a packaged chunk of functionality (the .NET equivalent of a DLL). Almost universally an assembly will consist of exactly one file (either a DLL or an EXE). To make things confusing, the naming convention for assemblies is very similar to the naming convention for namespaces. Be warned: they are not the same thing! An assembly may contain classes from many namespaces, and a namespace may cover many assemblies. Although not strictly correct, you can think of the assembly as the physical file containing the executable code and a namespace as a category to organize all the code that relates to a particular area.\nNamespaces are just labels that are used to allow classes from different assemblies to have the same name and not interfere with each other. Much like classes form a container, or boundary, for their members (so many different classes can have an \u201cOpen()\u201d method, for example), namespaces form a container for classes with the same name. Native .NET namespaces start with \u201cSystem\u201d (this is the root of the .NET namespace universe). The convention for application-specific namespaces is to start them with {CompanyName}.{ProductName}. 
So, for example, you may see Microsoft.Office.{some more stuff} for Office-related classes.\nOk, so we are back to the DLL hell days. \nI then found this good article about assemblies and conflicts.\nIn .NET, dependency conflicts occur when two versions of the same assembly are loaded into the same Assembly Load Context. This term means slightly different things on different .NET platforms, which is covered later in this article. This conflict is a common problem that occurs in any software where versioned dependencies are used.\nConflict issues are compounded by the fact that a project almost never deliberately or directly depends on two versions of the same dependency. Instead, the project has two or more dependencies that each require a different version of the same dependency.\nInterestingly, turns out PowerShell’s own dependencies can conflict with the dependencies of its modules. I have encountered this in the past, for the specific example given in that post, but hadn’t realized. 
That article has some good suggestions too on what to do, but none of them apply to my situation of using HRWs… I think.\nWhile all this was good info, eventually I was still stuck without a real solution.\nI tried to remove all the Az modules after they had done their work, and then run Connect-ExchangeOnline:Get-Module | Where-Object { $_.Name -match \"^Az\" } | Remove-Module\r\nConnect-ExchangeOnline @connectParamsFailed.\nSame if I remove the Az modules and also ExchangeOnlineManagement.\nWhat if I load the ExchangeOnlineManagement module first?Import-Module ExchangeOnlineManagement\r\nImport-Module Az\r\n\r\n# Do the Azure stuff...\r\n\r\n# Then connect to ExO\r\nConnect-ExchangeOnline @connectParamsI have to load the Az module before connecting, coz that’s how I get my certs from the Key Vault.\nNope, doesn’t work!\nAny difference if I load the specific ones I need?Import-Module ExchangeOnlineManagement\r\nImport-Module Az.Accounts\r\nImport-Module Az.KeyVault\r\n\r\n# Do the Azure stuff...\r\n\r\n# Then connect to ExO\r\nConnect-ExchangeOnline @connectParamsNo way, that worked!!\nBut that was when I tested on the HRW directly. What if I do this in the Runbook? Does it work?\nIt actually did! Whee! \nWeird thing is if I look at the loaded assemblies the version ExchangeOnlineManagement wants isn’t even loaded:\n\nUpdate: This also solves issues with PnP.PowerShell. Even though I am not specifically loading it above, in my code I am doing Connect-PnPOnline after Connect-ExchangeOnline and it works fine.\nUpdate (3rd Nov 2023): While reading Tony Redmond’s blog I see that he too encountered this last month. 
Like he said “…it\u2019s disappointing that two Microsoft engineering groups working in the Microsoft 365 ecosystem cannot agree on which version of a critical DLL to use.” Disappointing indeed.", "date_published": "2023-09-22T18:10:33+01:00", "date_modified": "2023-11-03T10:16:51+00:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "exchangeonline", "modules", "powershell", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7377", "url": "https://rakhesh.com/gadgets/touch-id-keyboard-for-mac-mini/", "title": "Touch ID keyboard for Mac mini", "content_html": "Long time no posts, just been busy with work.
\nI came across this post by Jason Snell the other day. I think I was Googling for it – some way of having Touch ID with my Mac mini. I don’t want to use the Magic Keyboard (expensive, I don’t use my existing Magic Keyboard without Touch ID in the first place) but wanted to see if there was some way of getting Touch ID on the Mac mini nevertheless. Typically I unlock with my Apple Watch but that’s often unreliable.
\nAnyhow, that gave me the idea of just taping one to the bottom of my table. I still didn’t want to buy one, but now I could get a used one too coz all I cared about was the Touch ID. It was a different way of looking at the issue. Thankfully on eBay I was able to pick up an “Opened but not used” Ukrainian Magic Keyboard with Touch ID and Numeric Keypad for 40 quid – not bad! Got it today, ordered some velcro from Amazon yesterday evening, and now I have the Keyboard taped to the bottom of my desk.
\nI spent some time watching the YouTube video where Myke takes it apart to extract the Touch ID button. As well as this video by Snazzy Labs. In the end I was too chicken to take it apart. Not only would I have to buy the tools from iFixit (or use a blade etc. like in some other videos), I’d also have to print a case and make sure I don’t hurt myself (a very likely event; heck, while cutting the velcro to tape the keyboard I cut my finger with the scissors… if you can believe that!). If you are not into videos, here are instructions from the person who inspired Myke and Snazzy Labs.
\nAnyways, I now have Touch ID with my Mac mini and that’s all I need!
\nUpdate: To disable the keyboard from accidental touches I installed Karabiner Elements and:
\n\n", "content_text": "Long time no posts, just been busy with work.\nI came across this post by Jason Snell the other day. I think I was Googling for it – some way of having Touch ID with my Mac mini. I don’t want to use the Magic Keyboard (expensive, I don’t use my existing Magic Keyboard without Touch ID in the first place) but wanted to see if there was some way of getting Touch ID on the Mac mini nevertheless. Typically I unlock with my Apple Watch but that’s often unreliable.\nAnyhow, that gave me the idea of just taping one to the bottom of my table. I still didn’t want to buy one, but now I could get a used one too coz all I cared about was the Touch ID. It was a different way of looking at the issue. Thankfully on eBay I was able to pick an “Opened by not used” Ukranian Magic Keyboard with Touch ID and Numeric Keypad for 40 quid – not bad! Got it today, ordered some velcro from Amazon yesterday evening, and now I have the Keyboard taped to the bottom of my desk. \nI spent some time watching the YouTube video where Myke takes it apart to extract the Touch ID button. As well as this video by Snazzy Labs. In the end I was too chicken to take it apart. Not only would I have to buy the tools from iFixit (or use a blade etc. like in some other videos), I’d also have to print a case and make sure I don’t hurt myself (a very likely event; heck, while cutting the velcro to tape the keyboard I cut my finger with the scissors… if you can believe that!). 
If you are not into videos here’s instructions from the person who inspired Myke and Snazzy Labs.\nAnyways, I now have Touch ID with my Mac mini and that’s all I need!\nUpdate: To disable the keyboard from accidental touches I installed Karabiner Elements and:", "date_published": "2023-09-20T14:13:27+01:00", "date_modified": "2023-09-20T17:45:05+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "Apple", "mac mini", "Gadgets" ] }, { "id": "https://rakhesh.com/?p=7367", "url": "https://rakhesh.com/azure/notes-on-event-hubs/", "title": "Notes on Event Hubs", "content_html": "I had been using Event Hubs + Azure Functions pretty naively for the past few months. Mainly coz I just assumed how some of the things work, and also coz I guess when working with the cloud you have this mindset that things just work and don’t really care about the details.
\nAnyways.
\nThe first thing is that I have this Function that does some processing, and if it fails I was pushing the item to an event hub thus:
try {\r\n Push-OutputBinding -Name eventHubMessages -Value $body -ErrorAction Stop\r\n} catch {\r\n Write-Host \"=== Error pushing ===\"\r\n # do something about it...\r\n}
The expectation being that if the push fails I can output it and also do something like email me the item for instance. But this doesn’t work coz you can’t put the Push-OutputBinding
in a try/ catch
block. I never tested whether this works or not, and always assumed it does, until I was testing something this weekend and realized the exceptions when pushing weren’t being caught. That’s because all output bindings are executed after a Function exits, and this is done by the Function host/ worker, not the Function itself.
The way I encountered this was because I copy pasted some event hub bindings between two of my Functions without realizing I was copying the wrong code. The Function I was copying from had an event hub trigger, while the Function I copied to had it as an output, and as you can see they have differences:
\nOutput binding:
\"bindings\": [\r\n {\r\n \"name\": \"eventHubMessagesOut\",\r\n \"direction\": \"out\",\r\n \"type\": \"eventHub\",\r\n \"connection\": \"xxx_Function_EVENTHUB\",\r\n \"eventHubName\": \"yyyy\"\r\n }\r\n]
\nInput (trigger) binding:
\"bindings\": [\r\n {\r\n \"type\": \"eventHubTrigger\",\r\n \"name\": \"eventHubMessages\",\r\n \"direction\": \"in\",\r\n \"eventHubName\": \"yyyy\",\r\n \"connection\": \"xxx_Function_EVENTHUB\",\r\n \"cardinality\": \"many\",\r\n \"consumerGroup\": \"$Default\"\r\n },\r\n]
Because of this mismatch I was getting an error: No binding found for attribute 'Microsoft.Azure.WebJobs.EventHubTriggerAttribute'.
I couldn’t find out why this was so until I realized the mistake I made.
\nThe biggest thing I learnt though was about retries. To begin with, check out this link on how Azure Functions consumes Event Hubs. I am going to copy paste it here.
\nAzure Functions consumes Event Hub events while cycling through the following steps:
\n1. A pointer is created and persisted in Azure Storage for each partition of the event hub.
\n2. When new messages are received (in a batch by default), the host attempts to trigger the function with the batch of messages.
\n3. If the function completes execution (with or without exception) the pointer advances and a checkpoint is saved to the storage account.
\n4. If conditions prevent the function execution from completing, the host fails to progress the pointer. If the pointer isn’t advanced, then later checks end up processing the same messages.
\n5. Repeat steps 2–4.
\nThis behavior reveals a few important points:
\n- Unhandled exceptions may cause you to lose messages. Executions that result in an exception will continue to progress the pointer. Setting a retry policy will delay progressing the pointer until the entire retry policy has been evaluated.
\n- Functions guarantees at-least-once delivery. Your code and dependent systems may need to account for the fact that the same message could be received twice.
\nThe last two points are super important. Unless a Function crashes, if an Event Hub message is read and the Function doesn’t process it for some reason, it may not see it again. That is to say, if I read 10 messages from the Hub, process 6 and run into some error for the remaining 4 – which I may or may not catch via a try/ catch
block – the Event Hub & Function don’t care and I may not see those messages again. So it is up to me, the developer, to ensure I handle failed messages.
I sort of knew this, but I also assumed that if the Function runs into an exception then it magically knows to re-read those messages again from the Event Hub. My bad, of course!
\nHere, once again, we encounter the Function host/ worker. It doesn’t know what the Function is doing, which is why it doesn’t know to re-read the messages. The only signal it has is the Function succeeding or crashing, and it’s on that basis that it re-reads messages if needed.
\nThe second point is that a Function may read the same message more than once. Because, if the Function crashes like we said above, subsequent executions might read already processed messages. So I as the developer must expect this and do something to ensure I can handle duplicates.
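As a sketch of what handling duplicates can look like: track the ids of messages you have already processed somewhere durable (a storage table, blob, etc.) and skip replays. The in-memory set and Invoke-Once helper here are hypothetical stand-ins, just for illustration:

```powershell
# Naive idempotency sketch: remember processed message ids and skip repeats.
# A real Function would persist this set durably, not keep it in memory.
$processed = [System.Collections.Generic.HashSet[string]]::new()

function Invoke-Once ($id, $action) {
    if ($processed.Add($id)) {
        & $action    # first time we see this id
    } else {
        Write-Host ('Skipping duplicate message ' + $id)
    }
}

Invoke-Once 'msg-1' { Write-Host 'processing msg-1' }
Invoke-Once 'msg-1' { Write-Host 'processing msg-1' }   # replayed; skipped
```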
\nEvery function must have try/ catch
blocks to handle messages that didn’t process.
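To make that concrete, here’s a minimal sketch of per-message error handling in a PowerShell Function. Process-Message is a hypothetical stand-in for the real work, and the sample batch stands in for what the trigger binding would supply; the point is that failures are collected rather than lost when the host checkpoints the batch:

```powershell
# Sketch: handle each Event Hub message individually so one bad message
# doesn't silently sink the whole batch. Process-Message is hypothetical.
function Process-Message ($m) {
    if ($m -eq 'bad') { throw ('cannot process ' + $m) }
}

$eventHubMessages = @('one', 'bad', 'three')   # normally supplied by the trigger binding
$failed = @()

foreach ($message in $eventHubMessages) {
    try {
        Process-Message $message
    } catch {
        Write-Host ('Failed to process message: ' + $_)
        $failed += $message
    }
}

# The host checkpoints the batch regardless of exceptions, so anything
# in $failed is gone unless we persist or report it ourselves
if ($failed.Count -gt 0) {
    Write-Host ($failed.Count.ToString() + ' message(s) need attention')
}
```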
Next, this article on checkpointing. Again, I’ll copy paste:
\nCheckpoints mark or commit reader positions in a partition event sequence. It’s the responsibility of the Functions host to checkpoint as events are processed and the setting for the batch checkpoint frequency is met. For more information about checkpointing, see Features and terminology in Azure Event Hubs.
\nThe following concepts can help you understand the relationship between checkpointing and the way that your function processes events:
\n- Exceptions still count towards success: If the function process doesn’t crash while processing events, the completion of the function is considered successful, even if exceptions occurred. When the function completes, the Functions host evaluates batchCheckpointFrequency. If it’s time for a checkpoint, it creates one, regardless of whether there were exceptions. The fact that exceptions don’t affect checkpointing shouldn’t affect your proper use of exception checking and handling.
\n- Batch frequency matters: In high-volume event streaming solutions, it can be beneficial to change the batchCheckpointFrequency setting to a value greater than 1. Increasing this value can reduce the rate of checkpoint creation and, as a consequence, the number of storage I/O operations.
\n- Replays can happen: Each time a function is invoked with the Event Hubs trigger binding, it uses the most recent checkpoint to determine where to resume processing. The offset for every consumer is saved at the partition level for each consumer group. Replays happen when a checkpoint doesn’t occur during the last invocation of the function, and the function is invoked again. For more information about duplicates and deduplication techniques, see Idempotency.
\nUnderstanding checkpointing becomes critical when you consider best practices for error handling and retries, a topic that’s discussed later in this article.
\nThe first and last points we already know. But batchCheckpointFrequency
is something new. What is this setting?
Several configuration settings in the host.json file play a key role in the performance characteristics of the Event Hubs trigger binding for Functions:
\n- maxEventBatchSize: This setting represents the maximum number of events that the function can receive when it’s invoked. If the number of events received is less than this amount, the function is still invoked with as many events as are available. You can’t set a minimum batch size.
\n- prefetchCount: The prefetch count is one of the most important settings when you optimize for performance. The underlying AMQP channel references this value to determine how many messages to fetch and cache for the client. The prefetch count should be greater than or equal to the maxEventBatchSize value and is commonly set to a multiple of that amount. Setting this value to a number less than the maxEventBatchSize setting can hurt performance.
\n- batchCheckpointFrequency: As your function processes batches, this value determines the rate at which checkpoints are created. The default value is 1, which means that there’s a checkpoint whenever a function successfully processes a batch. A checkpoint is created at the partition level for each reader in the consumer group. For information about how this setting influences replays and retries of events, see Event hub triggered Azure function: Replays and Retries (blog post).
\nDo several performance tests to determine the values to set for the trigger binding. We recommend that you change settings incrementally and measure consistently to fine-tune these options. The default values are a reasonable starting point for most event processing solutions.
\nThe default value of 1 means as each batch is processed a checkpoint is written. And the maxEventBatchSize
tells how many messages are pulled at most, each time. (There is no minimum amount, and also notice there is no setting that says how to get a Function to query an Event Hub for new messages – say when you are troubleshooting, or coz your Function crashed and now you want it to check for new messages. The only way to do that is to send something to the Event Hub, causing it to push to the Function).
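These settings live in the Function App’s host.json. A sketch of the shape, with illustrative values (the exact nesting varies between versions of the Event Hubs extension, and these numbers are not recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "maxEventBatchSize": 100,
      "prefetchCount": 300,
      "batchCheckpointFrequency": 1
    }
  }
}
```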
Here’s some good info on how you can get duplicate messages.
\nMore later, I am still learning stuff.
\n", "content_text": "I had been using Event Hubs + Azure Functions pretty naively for the past few months. Mainly coz I just assumed how some of the things work, and also coz I guess when working with the cloud you have this mindset that things just work and don’t really care about the details.\nAnyways.\nThe first thing is that I have this Function that does some processing, and if it fails I was pushing the item to an event hub thus:try {\r\n Push-OutputBinding -Name eventHubMessages -Value $body -ErrorAction Stop\r\n} catch {\r\n Write-Host \"=== Error pushing ===\"\r\n # do something about it...\r\n}The expectation being that if the push fails I can output it and also do something like email me the item for instance. But this doesn’t work coz you can’t put the Push-OutputBinding in a try/ catch block. I never tested whether this works or not, and always assumed it does, until I was testing something this weekend and realized the exceptions when pushing weren’t being caught. That’s because all output bindings are executed after a Function exits and is done by the Function host/ worker, not the Function itself.\nThe way I encountered this was because I copy pasted some event hub bindings between two of my Functions without realizing I was copying the wrong code. 
The Function I was copying from had event hub triggers, while the Function I copied to had it as output and as you can see they have differences:\n\n\n\nOutput binding\nInput binding\n\n\n\n\"bindings\": [\r\n {\r\n \"name\": \"eventHubMessagesOut\",\r\n \"direction\": \"out\",\r\n \"type\": \"eventHub\",\r\n \"connection\": \"xxx_Function_EVENTHUB\",\r\n \"eventHubName\": \"yyyy\"\r\n }\r\n]\n \n\n\"bindings\": [\r\n {\r\n \"type\": \"eventHubTrigger\",\r\n \"name\": \"eventHubMessages\",\r\n \"direction\": \"in\",\r\n \"eventHubName\": \"yyyy\",\r\n \"connection\": \"xxx_Function_EVENTHUB\",\r\n \"cardinality\": \"many\",\r\n \"consumerGroup\": \"$Default\"\r\n },\r\n]\n \n\n\n\nBecause of this mismatch I was getting an error: No binding found for attribute 'Microsoft.Azure.WebJobs.EventHubTriggerAttribute'.\nI couldn’t find out why this was so until I realized the mistake I made.\nThe biggest thing I learnt though was about retries. To begin with, check out this link on how Azure Functions consumes Event Hubs. I am going to copy paste it here.\nAzure Functions consumes Event Hub events while cycling through the following steps:\n\nA pointer is created and persisted in Azure Storage for each partition of the event hub.\nWhen new messages are received (in a batch by default), the host attempts to trigger the function with the batch of messages.\nIf the function completes execution (with or without exception) the pointer advances and a checkpoint is saved to the storage account.\nIf conditions prevent the function execution from completing, the host fails to progress the pointer. If the pointer isn’t advanced, then later checks end up processing the same messages.\nRepeat steps 2\u20134\n\nThis behavior reveals a few important points:\n\nUnhandled exceptions may cause you to lose messages. Executions that result in an exception will continue to progress the pointer. 
Setting a retry policy will delay progressing the pointer until the entire retry policy has been evaluated.\nFunctions guarantees at-least-once delivery. Your code and dependent systems may need to account for the fact that the same message could be received twice.\n\nThe last two points are super important. Unless a Function crashes, if an Event Hub message is read and the Function doesn’t process it for some reason, it may not see it again. That is to say, if I read 10 messages from the Hub, process 6 and run into some error for the remaining 4 – which I may or may not catch via a try/ catch block – the Event Hub & Function don’t care and I may not see those messages again. So it is up to me, the developer, to ensure I handle failed messages.\nI sort of knew this, but I also assumed that if the Function runs into an exception then it magically knows to re-read those messages again from the Event Hub. My bad, of course!\nHere, once again, we encounter the Function host/ worker. It doesn’t know what the Function is doing, which is why it doesn’t know to re-read the messages. The only signal it has is that of the Function succeeded or crashing, and it’s on that basis that it re-reads messages if needed.\nThe second point is that a Function may read the same message more than once. Because, if the Function crashes like we said above, subsequent executions might read already processed messages. So I as the developer must expect this and do something to ensure I can handle duplicates.\nEvery function must have try/ catch blocks to handle messages that didn’t process.\nNext, this article on checkpointing. Again, I’ll copy paste:\nCheckpoints mark or commit reader positions in a partition event sequence. It’s the responsibility of the Functions host to checkpoint as events are processed and the setting for the batch checkpoint frequency is met. 
For more information about checkpointing, see Features and terminology in Azure Event Hubs.\nThe following concepts can help you understand the relationship between checkpointing and the way that your function processes events:\n\nExceptions still count towards success: If the function process doesn’t crash while processing events, the completion of the function is considered successful, even if exceptions occurred. When the function completes, the Functions host evaluates batchCheckpointFrequency. If it’s time for a checkpoint, it creates one, regardless of whether there were exceptions. The fact that exceptions don’t affect checkpointing shouldn’t affect your proper use of exception checking and handling.\nBatch frequency matters: In high-volume event streaming solutions, it can be beneficial to change the batchCheckpointFrequency setting to a value greater than 1. Increasing this value can reduce the rate of checkpoint creation and, as a consequence, the number of storage I/O operations.\nReplays can happen: Each time a function is invoked with the Event Hubs trigger binding, it uses the most recent checkpoint to determine where to resume processing. The offset for every consumer is saved at the partition level for each consumer group. Replays happen when a checkpoint doesn’t occur during the last invocation of the function, and the function is invoked again. For more information about duplicates and deduplication techniques, see Idempotency.\n\nUnderstanding checkpointing becomes critical when you consider best practices for error handling and retries, a topic that’s discussed later in this article.\nThe first and last points we already know. But batchCheckpointFrequency is something new. 
What is this setting?\nSeveral configuration settings in the host.json file play a key role in the performance characteristics of the Event Hubs trigger binding for Functions:\n\nmaxEventBatchSize: This setting represents the maximum number of events that the function can receive when it’s invoked. If the number of events received is less than this amount, the function is still invoked with as many events as are available. You can’t set a minimum batch size.\nprefetchCount: The prefetch count is one of the most important settings when you optimize for performance. The underlying AMQP channel references this value to determine how many messages to fetch and cache for the client. The prefetch count should be greater than or equal to the maxEventBatchSize value and is commonly set to a multiple of that amount. Setting this value to a number less than the maxEventBatchSize setting can hurt performance.\nbatchCheckpointFrequency: As your function processes batches, this value determines the rate at which checkpoints are created. The default value is 1, which means that there’s a checkpoint whenever a function successfully processes a batch. A checkpoint is created at the partition level for each reader in the consumer group. For information about how this setting influences replays and retries of events, see Event hub triggered Azure function: Replays and Retries (blog post).\n\nDo several performance tests to determine the values to set for the trigger binding. We recommend that you change settings incrementally and measure consistently to fine-tune these options. The default values are a reasonable starting point for most event processing solutions.\nThe default value of 1 means as each batch is processed a checkpoint is written. And the maxEventBatchSize tells how many messages are pulled at most, each time. 
(There is no minimum amount, and also notice there is no setting that says how to get a Function to query an Event Hub for new messages – say when you are troubleshooting, or coz you Function crashed and now you want it to check for new messages. The only way to do that is to send something to the Event Hub, causing it to push to the Function).\nHere’s some good info on how you can get duplicate messages.\nMore later, I am still learning stuff.", "date_published": "2023-08-13T23:23:58+01:00", "date_modified": "2023-08-13T23:23:58+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "event hubs", "functions", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7364", "url": "https://rakhesh.com/coding/using-hammerspoon-to-switch-apps-part-2/", "title": "Using Hammerspoon to switch apps (part 2)", "content_html": "A while ago I had posted how I use Hammerspoon to switch apps.
\nEssentially I have a list of shortcuts defined like this:
ctrlCmdShortcuts = {\r\n {\"A\", \"The Archive\"},\r\n {\"C\", \"Visual Studio Code; Calendar\"},\r\n {\"F\", \"Firefox; Finder\"},\r\n {\"T\", \"Things3\"},\r\n {\"O\", \"Microsoft Outlook\"},\r\n {\"W\", \"WorkFlowy; Microsoft Word\"},\r\n}
And I can press Ctrl+Cmd+C
to switch to Visual Studio Code, Ctrl+Cmd+F
to switch to Firefox, and so on. That’s how I began things, but as I detailed in that post I then extended this to switch between apps. Thus, in the above example, the first time I press Ctrl+Cmd+W
I switch to Workflowy, but if I am already on Workflowy and I press these keys it will take me to Microsoft Word. Which is so neat coz I have an app switcher of sorts that just switches between these apps.
Moreover, if there are multiple windows it will switch between these. So Ctrl+Cmd+O
will take me to Outlook, press it again and it will either do nothing, or if there’s another window it will switch to that. Press again, and if there’s yet another window it will take me that, else take me back to the first window. Very neat!
There is a catch in that if there is more than one window of an application, and I have defined a second app too for that key, it won’t switch to that second application. So Ctrl+Cmd+C
will take me to Visual Studio Code, pressing again will take me to the second window if it exists, pressing again will take me back to the first window (assuming only two windows). I won’t ever go to Calendar until I have just one window of Visual Studio Code.
The keys can also launch an application if it’s not open. For instance, press Ctrl+Cmd+O
and if Outlook is not open it will launch and switch to it. :) This behaviour is what I now wanted to fine-tune. I came across this blog post by Christian Sellig where he uses Hammerspoon to switch between Xcode windows and if Xcode isn’t already launched it won’t open it. That’s a good idea, but I wanted to take it one step further and have it as an optional thing.
That’s to say, with things like Outlook which are work related, I don’t want to press Ctrl+Cmd+O
on a weekend and suddenly be faced with work emails; but I am ok with Ctrl+Cmd+F
launching Firefox if it isn’t running.
So I came up with this variant:
ctrlCmdShortcuts = {\r\n {\"A\", \"The Archive\"},\r\n {\"C\", \"Visual Studio Code; Calendar*\"},\r\n {\"F\", \"Firefox; Finder\"},\r\n {\"T\", \"Things3\"},\r\n {\"O\", \"Microsoft Outlook*\"},\r\n {\"W\", \"WorkFlowy; Microsoft Word*\"},\r\n}
If a program has an asterisk next to it, don’t launch it if it isn’t already running. Else feel free to launch it.
\nTo achieve this I had to modify my previous script a bit.
-- Takes a list of apps (appList) and appName and separator (defaults to ;)\r\n-- Tells me what app to launch. Answer could be appName itself.\r\nlocal function getAppToLaunchFromList(appList, appName, separator)\r\n -- If no separator is specified assume it is a semi-colon\r\n if separator == nil then\r\n separator = ';'\r\n end\r\n\r\n local position = 0\r\n local counter = 1\r\n local tokens = {}\r\n\r\n -- the ..xxx.. notation is how you do string interpolation (i.e. put a variable in a string)\r\n -- so we are have a regex [^xxx]+... which means any character that is not one or more instances of xxx\r\n --[[\r\n appList = \"Microsoft Outlook; Microsoft Word\"\r\n separator = \";\"\r\n for str in string.gmatch(appList, \"([^\"..separator..\"]+)\") do\r\n print(str)\r\n sanitizedAppName = (string.gsub(str, '^%s+', ''))\r\n print(sanitizedAppName)\r\n end\r\n\r\n output:\r\n Microsoft Outlook\r\n Microsoft Outlook\r\n Microsoft Word\r\n Microsoft Word\r\n\r\n ]]--\r\n -- notice it includes the space; so we remove that too later\r\n for str in string.gmatch(appList, \"([^\"..separator..\"]+)\") do\r\n -- Sanitize the name by removing any spaces before the name... 
coz you would enter \"abc; def\" but the app name is actually \"def\"\r\n -- I must put the whole thing in brackets coz else the output is the replace string followed by the number of times a replacement was made\r\n -- https://www.lua.org/manual/5.4/manual.html#3.4.12\r\n sanitizedAppName = (string.gsub(str, '^%s+', ''))\r\n\r\n table.insert(tokens, sanitizedAppName)\r\n if sanitizedAppName == appName then\r\n -- If we match the app name set the position to that\r\n position = counter\r\n else\r\n -- Else keep incrementing the counter until the end\r\n counter = counter + 1\r\n end\r\n end\r\n\r\n -- If position is 0 it means we didn't find anything\r\n if position == 0 then\r\n return nil\r\n else\r\n if position == #tokens then\r\n return tokens[1]\r\n else\r\n return tokens[position+1]\r\n end\r\n end\r\nend\r\n\r\n-- Returns the first app in the list of apps\r\nlocal function getFirstAppFromList(appList, separator)\r\n -- If no separator is specified assume it is a semi-colon\r\n if separator == nil then\r\n separator = ';'\r\n end\r\n\r\n -- Check if the appList has the separator; if not we know it's a single entry\r\n if string.find(appList, \"([^\"..separator..\"]+)\") then\r\n -- Replace ; followed by whatever with nothing\r\n -- Got to enclose the whole thing in () for reasons I mention in the other function\r\n return (string.gsub(appList, \";.*\", ''))\r\n else\r\n return appList\r\n end\r\nend\r\n\r\n-- Launch, Focus or Rotate application\r\n-- From https://apple.stackexchange.com/a/455010\r\n-- Modified by me\r\nlocal function launchOrFocusOrRotate(appList)\r\n -- Get the first app from the list\r\n local appFromList = getFirstAppFromList(appList)\r\n\r\n -- thanks http://lua-users.org/wiki/PatternsTutorial\r\n local app\r\n if string.match(appFromList,'*$') then\r\n -- check if an app is already running. 
the app name is the name we got from the list, with the * removed\r\n app = string.gsub(appFromList,\"*\",\"\")\r\n local appFind = hs.application.find(app)\r\n if appFind == nil then\r\n -- thanks http://lua-users.org/wiki/StringInterpolation for how to include variable in string\r\n local message = string.format(\" %s is not open\", app)\r\n hs.notify.new({\r\n title = message, \r\n informativeText = \"Manually launch \" ..app.. \" and then try if you want to switch to that\"}):send()\r\n return\r\n end\r\n else\r\n app = appFromList\r\n end\r\n\r\n local focusedWindow = hs.window.focusedWindow()\r\n -- Output of the above is an hs.window object\r\n\r\n -- I can get the application it belongs to via the :application() method\r\n -- See https://www.hammerspoon.org/docs/hs.window.html#application \r\n local focusedWindowApp = focusedWindow:application()\r\n -- This returns an hs.application object\r\n\r\n -- Get the name of this application; this isn't really useful for us as launchOrFocus needs the app name on disk\r\n -- I do use it below, further on...\r\n local focusedWindowAppName = focusedWindowApp:name()\r\n\r\n -- This gives the path - /Applications/<application>.app\r\n local focusedWindowPath = focusedWindowApp:path()\r\n\r\n -- I need to extract <application> from that\r\n local appNameOnDisk = string.gsub(focusedWindowPath,\"/Applications/\", \"\")\r\n local appNameOnDisk = string.gsub(appNameOnDisk,\".app\", \"\")\r\n -- Finder has this as its path\r\n local appNameOnDisk = string.gsub(appNameOnDisk,\"/System/Library/CoreServices/\",\"\")\r\n\r\n -- If already focused, try to find the next window\r\n if focusedWindow and appNameOnDisk == app then\r\n -- hs.application.get needs the name as per hs.application:name() and not the name on disk\r\n -- It can also take pid or bundle, but that doesn't help here\r\n -- Since I have the name already from above, I can use that though\r\n local appWindows = 
hs.application.get(focusedWindowAppName):allWindows()\r\n\r\n -- https://www.hammerspoon.org/docs/hs.application.html#allWindows\r\n -- A table of zero or more hs.window objects owned by the application. From the current space. \r\n\r\n -- Does the app have more than 1 window, if so switch between them\r\n if #appWindows > 1 then\r\n -- It seems that this list order changes after one window get focused, \r\n -- Let's directly bring the last one to focus every time\r\n -- https://www.hammerspoon.org/docs/hs.window.html#focus\r\n if app == \"Finder\" then\r\n -- If the app is Finder the window count returned is one more than the actual count, so I subtract\r\n appWindows[#appWindows-1]:focus()\r\n else\r\n appWindows[#appWindows]:focus()\r\n end\r\n else\r\n -- The app doesn't have more than one window, but we are focussed on it and still pressing the key\r\n -- So let's switch to any other app in that list if present\r\n appFromList = getAppToLaunchFromList(appList, app)\r\n\r\n -- thanks http://lua-users.org/wiki/PatternsTutorial\r\n local app\r\n if string.match(appFromList,'*$') then\r\n -- check if an app is already running. the app name is the name we got from the list, with the * removed\r\n app = string.gsub(appFromList,\"*\",\"\")\r\n local appFind = hs.application.find(app)\r\n if appFind == nil then\r\n -- thanks http://lua-users.org/wiki/StringInterpolation for how to include variable in string\r\n local message = string.format(\" %s is not open\", app)\r\n hs.notify.new({\r\n title = message, \r\n informativeText = \"Manually launch \" ..app.. 
\" and then try if you want to switch to that\"}):send()\r\n return\r\n end\r\n else\r\n app = appFromList\r\n end\r\n\r\n hs.application.launchOrFocus(app)\r\n \r\n -- Finder needs special treatment\r\n -- From https://zhiye.li/hammerspoon-use-the-keyboard-shortcuts-to-launch-apps-a7c59ab3d92\r\n if app == 'Finder' then\r\n hs.appfinder.appFromName(app):activate()\r\n end\r\n end\r\n else -- if not focused\r\n hs.application.launchOrFocus(app)\r\n -- Finder needs special treatment\r\n -- From https://zhiye.li/hammerspoon-use-the-keyboard-shortcuts-to-launch-apps-a7c59ab3d92\r\n if app == 'Finder' then\r\n hs.appfinder.appFromName(app):activate()\r\n end\r\n end\r\nend
I’ve highlighted the parts I added. But I also changed the rest of the code slightly, so there will be some differences from what I put in the previous post. Basically I added code to check for the asterisk and, if it is present, not launch the app.
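The core of the tweak — splitting the app list on the separator, spotting the trailing asterisk, and rotating to the next entry — can be sketched outside Hammerspoon too. Here’s a rough Python equivalent (illustrative only; the function names are mine, not part of the actual config):

```python
def parse_entry(entry):
    # A trailing "*" marks an app that should never be auto-launched.
    # Returns (app name without the asterisk, may_launch flag).
    if entry.endswith("*"):
        return entry[:-1], False
    return entry, True

def next_app(app_list, current, sep=";"):
    # Split on the separator and strip surrounding spaces, mirroring the
    # gsub('^%s+', '') sanitising in the Lua version; then return the
    # entry after `current`, wrapping around to the first one.
    tokens = [t.strip() for t in app_list.split(sep)]
    names = [parse_entry(t)[0] for t in tokens]
    if current not in names:
        return None
    return tokens[(names.index(current) + 1) % len(tokens)]
```

So `next_app("Visual Studio Code; Calendar*", "Visual Studio Code")` hands back `"Calendar*"`, and `parse_entry` then tells the caller not to launch Calendar if it isn’t already running.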
\nI love programming in Lua. I haven’t done much except with Hammerspoon, but it’s very neat in a way and I like it.
\n\n", "content_text": "A while ago I had posted how I use Hammerspoon to switch apps.\nEssentially I have a list of shortcuts defined like this:ctrlCmdShortcuts = {\r\n {\"A\", \"The Archive\"},\r\n {\"C\", \"Visual Studio Code; Calendar\"},\r\n {\"F\", \"Firefox; Finder\"},\r\n {\"T\", \"Things3\"},\r\n {\"O\", \"Microsoft Outlook\"},\r\n {\"W\", \"WorkFlowy; Microsoft Word\"},\r\n}And I can press Ctrl+Cmd+C to switch to Visual Studio Code, Ctrl+Cmd+F to switch to Firefox, and so on. That’s how I began things, but as I detailed in that post I then extended this to switch between apps. Thus, in the above example, the first time I press Ctrl+Cmd+W I switch to Workflowy, but if I am already on Workflowy and I press these keys it will take me to Microsoft Word. Which is so neat coz I have an app switcher of sorts that just switches between these apps.\nMoreover, if there are multiple windows it will switch between these. So Ctrl+Cmd+O will take me to Outlook, press it again and it will either do nothing, or if there’s another window it will switch to that. Press again, and if there’s yet another window it will take me that, else take me back to the first window. Very neat!\nThere is a catch in that if there are more than one window of an application, and I have defined a second one too for that key, it won’t switch to that second application. So Ctrl+Cmd+C will take me to Visual Studio Code, pressing again will take me to the second window if it exists, pressing again will take me back to the first window (assuming only two windows). I won’t ever go to Calendar until I have just one window of Visual Studio Code.\nThe keys can also launch an application if it’s not open. For instance, press Ctrl+Cmd+O and if Outlook is not open it will launch and switch to it. :) This behaviour is what I now wanted to fine tune. 
I came across this blog post by Christian Sellig where he uses Hammerspoon to switch between XCode windows and if XCode isn’t already launched it won’t open it. That’s a good idea, but I wanted to take it one step further and have it as an optional thing.\nThat’s to say, with things like Outlook which are work related, I don’t want to press Ctrl+Cmd+O on a weekend and suddenly be faced with work emails; but I am ok with Ctrl+Cmd+F launching Firefox if it isn’t running.\nSo I came up with this variant:ctrlCmdShortcuts = {\r\n {\"A\", \"The Archive\"},\r\n {\"C\", \"Visual Studio Code; Calendar*\"},\r\n {\"F\", \"Firefox; Finder\"},\r\n {\"T\", \"Things3\"},\r\n {\"O\", \"Microsoft Outlook*\"},\r\n {\"W\", \"WorkFlowy; Microsoft Word*\"},\r\n}If a program has an asterisk next to it, don’t launch it if it isn’t already running. Else feel free to launch it.\nTo achieve this I had to modify my previous script a bit.-- Takes a list of apps (appList) and appName and separator (defaults to ;)\r\n-- Tells me what app to launch. Answer could be appName itself.\r\nlocal function getAppToLaunchFromList(appList, appName, separator)\r\n -- If no separator is specified assume it is a semi-colon\r\n if separator == nil then\r\n separator = ';'\r\n end\r\n\r\n local position = 0\r\n local counter = 1\r\n local tokens = {}\r\n\r\n -- the ..xxx.. notation is how you do string interpolation (i.e. put a variable in a string)\r\n -- so we have a regex [^xxx]+... 
which means any character that is not one or more instances of xxx\r\n --[[\r\n appList = \"Microsoft Outlook; Microsoft Word\"\r\n separator = \";\"\r\n for str in string.gmatch(appList, \"([^\"..separator..\"]+)\") do\r\n print(str)\r\n sanitizedAppName = (string.gsub(str, '^%s+', ''))\r\n print(sanitizedAppName)\r\n end\r\n\r\n output:\r\n Microsoft Outlook\r\n Microsoft Outlook\r\n Microsoft Word\r\n Microsoft Word\r\n\r\n ]]--\r\n -- notice it includes the space; so we remove that too later\r\n for str in string.gmatch(appList, \"([^\"..separator..\"]+)\") do\r\n -- Sanitize the name by removing any spaces before the name... coz you would enter \"abc; def\" but the app name is actually \"def\"\r\n -- I must put the whole thing in brackets coz else the output is the replace string followed by the number of times a replacement was made\r\n -- https://www.lua.org/manual/5.4/manual.html#3.4.12\r\n sanitizedAppName = (string.gsub(str, '^%s+', ''))\r\n\r\n table.insert(tokens, sanitizedAppName)\r\n if sanitizedAppName == appName then\r\n -- If we match the app name set the position to that\r\n position = counter\r\n else\r\n -- Else keep incrementing the counter until the end\r\n counter = counter + 1\r\n end\r\n end\r\n\r\n -- If position is 0 it means we didn't find anything\r\n if position == 0 then\r\n return nil\r\n else\r\n if position == #tokens then\r\n return tokens[1]\r\n else\r\n return tokens[position+1]\r\n end\r\n end\r\nend\r\n\r\n-- Returns the first app in the list of apps\r\nlocal function getFirstAppFromList(appList, separator)\r\n -- If no separator is specified assume it is a semi-colon\r\n if separator == nil then\r\n separator = ';'\r\n end\r\n\r\n -- Check if the appList has the separator; if not we know it's a single entry\r\n if string.find(appList, \"([^\"..separator..\"]+)\") then\r\n -- Replace ; followed by whatever with nothing\r\n -- Got to enclose the whole thing in () for reasons I mention in the other function\r\n return 
(string.gsub(appList, \";.*\", ''))\r\n else\r\n return appList\r\n end\r\nend\r\n\r\n-- Launch, Focus or Rotate application\r\n-- From https://apple.stackexchange.com/a/455010\r\n-- Modified by me\r\nlocal function launchOrFocusOrRotate(appList)\r\n -- Get the first app from the list\r\n local appFromList = getFirstAppFromList(appList)\r\n\r\n -- thanks http://lua-users.org/wiki/PatternsTutorial\r\n local app\r\n if string.match(appFromList,'*$') then\r\n -- check if an app is already running. the app name is the name we got from the list, with the * removed\r\n app = string.gsub(appFromList,\"*\",\"\")\r\n local appFind = hs.application.find(app)\r\n if appFind == nil then\r\n -- thanks http://lua-users.org/wiki/StringInterpolation for how to include variable in string\r\n local message = string.format(\" %s is not open\", app)\r\n hs.notify.new({\r\n title = message, \r\n informativeText = \"Manually launch \" ..app.. \" and then try if you want to switch to that\"}):send()\r\n return\r\n end\r\n else\r\n app = appFromList\r\n end\r\n\r\n local focusedWindow = hs.window.focusedWindow()\r\n -- Output of the above is an hs.window object\r\n\r\n -- I can get the application it belongs to via the :application() method\r\n -- See https://www.hammerspoon.org/docs/hs.window.html#application \r\n local focusedWindowApp = focusedWindow:application()\r\n -- This returns an hs.application object\r\n\r\n -- Get the name of this application; this isn't really useful for us as launchOrFocus needs the app name on disk\r\n -- I do use it below, further on...\r\n local focusedWindowAppName = focusedWindowApp:name()\r\n\r\n -- This gives the path - /Applications/<application>.app\r\n local focusedWindowPath = focusedWindowApp:path()\r\n\r\n -- I need to extract <application> from that\r\n local appNameOnDisk = string.gsub(focusedWindowPath,\"/Applications/\", \"\")\r\n local appNameOnDisk = string.gsub(appNameOnDisk,\".app\", \"\")\r\n -- Finder has this as its path\r\n local 
appNameOnDisk = string.gsub(appNameOnDisk,\"/System/Library/CoreServices/\",\"\")\r\n\r\n -- If already focused, try to find the next window\r\n if focusedWindow and appNameOnDisk == app then\r\n -- hs.application.get needs the name as per hs.application:name() and not the name on disk\r\n -- It can also take pid or bundle, but that doesn't help here\r\n -- Since I have the name already from above, I can use that though\r\n local appWindows = hs.application.get(focusedWindowAppName):allWindows()\r\n\r\n -- https://www.hammerspoon.org/docs/hs.application.html#allWindows\r\n -- A table of zero or more hs.window objects owned by the application. From the current space. \r\n\r\n -- Does the app have more than 1 window, if so switch between them\r\n if #appWindows > 1 then\r\n -- It seems that this list order changes after one window get focused, \r\n -- Let's directly bring the last one to focus every time\r\n -- https://www.hammerspoon.org/docs/hs.window.html#focus\r\n if app == \"Finder\" then\r\n -- If the app is Finder the window count returned is one more than the actual count, so I subtract\r\n appWindows[#appWindows-1]:focus()\r\n else\r\n appWindows[#appWindows]:focus()\r\n end\r\n else\r\n -- The app doesn't have more than one window, but we are focussed on it and still pressing the key\r\n -- So let's switch to any other app in that list if present\r\n appFromList = getAppToLaunchFromList(appList, app)\r\n\r\n -- thanks http://lua-users.org/wiki/PatternsTutorial\r\n local app\r\n if string.match(appFromList,'*$') then\r\n -- check if an app is already running. 
the app name is the name we got from the list, with the * removed\r\n app = string.gsub(appFromList,\"*\",\"\")\r\n local appFind = hs.application.find(app)\r\n if appFind == nil then\r\n -- thanks http://lua-users.org/wiki/StringInterpolation for how to include variable in string\r\n local message = string.format(\" %s is not open\", app)\r\n hs.notify.new({\r\n title = message, \r\n informativeText = \"Manually launch \" ..app.. \" and then try if you want to switch to that\"}):send()\r\n return\r\n end\r\n else\r\n app = appFromList\r\n end\r\n\r\n hs.application.launchOrFocus(app)\r\n \r\n -- Finder needs special treatment\r\n -- From https://zhiye.li/hammerspoon-use-the-keyboard-shortcuts-to-launch-apps-a7c59ab3d92\r\n if app == 'Finder' then\r\n hs.appfinder.appFromName(app):activate()\r\n end\r\n end\r\n else -- if not focused\r\n hs.application.launchOrFocus(app)\r\n -- Finder needs special treatment\r\n -- From https://zhiye.li/hammerspoon-use-the-keyboard-shortcuts-to-launch-apps-a7c59ab3d92\r\n if app == 'Finder' then\r\n hs.appfinder.appFromName(app):activate()\r\n end\r\n end\r\nendI’ve highlighted the parts I added. But I changed the other code too slightly so there will be some difference to what I put in the previous post. Basically I added code to check for the existence of the asterisk and accordingly not do anything.\nI love programming in Lua. 
I haven’t done much except with Hammerspoon, but it’s very neat in a way and I like it.\n ", "date_published": "2023-08-06T19:12:22+01:00", "date_modified": "2023-08-06T19:12:22+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "hammerspoon", "lua", "Coding", "Mac" ] }, { "id": "https://rakhesh.com/?p=7356", "url": "https://rakhesh.com/azure/fun-with-pnp-powershell/", "title": "Fun with PnP PowerShell", "content_html": "
I had to do something with SharePoint lists today and had some fun with PnP PowerShell in the process. Here’s what I had to do. (What follows is a made up example).
\nI have a bunch of lists that contain the prices of an item per country. For instance, there’s 6 lists for 6 items.
\n\nHere’s what one of the lists looks like:
\n\nAll 6 lists have the same countries, and the price of the item in that country.
\nWhat I wanted to do was have country specific lists that showed the price of the 6 items for that country. Initially I thought I might be able to do a lookup or something between lists, but that doesn’t work. Instead I decided to create new lists that are populated via PnP PowerShell.
\nTo begin with, let’s get the names of the items and the countries. I do the first by finding all lists with “Item xx Prices” as the name. And for the latter I simply read all the Title field values from one of these lists.
$itemNames = (Get-PnpList | Where-Object { $_.Title -match \"^Item.*Prices$\" }).Title\r\n\r\n$countryNames = (Get-PnPListItem -List \"Item 1 Prices\").FieldValues.Title
Then I’d want to process each country. Check if there’s a list with that country name already, and if not create it. Then check if the list (new or existing) has a field called Price, and if not create it. And lastly go through each of the item lists, find the price of the item for that country, and add these as rows to the newly created list.
\nI did it this way (the snippet includes the two lookups I showed above too).
$itemNames = (Get-PnpList | Where-Object { $_.Title -match \"^Item.*Prices$\" }).Title\r\n\r\n$countryNames = (Get-PnPListItem -List \"Item 1 Prices\").FieldValues.Title\r\n\r\nforeach ($country in $countryNames) {\r\n $newListName = \"Country: $country\"\r\n\r\n Write-Progress \"Processing $newListName\"\r\n\r\n try {\r\n Get-PnPList -Identity \"$newListName\" -ErrorAction Stop | Out-Null\r\n Write-Progress \"List exists $newListName\"\r\n } catch {\r\n if ($_.Exception.Message -match \"does not exist at site\") {\r\n Write-Progress \"Creating new list $newListName\"\r\n try {\r\n New-PnPList -Title \"$newListName\" -Template GenericList -ErrorAction Stop | Out-Null\r\n Write-Progress \"Created new list $newListName\"\r\n Start-Sleep -Seconds 5\r\n } catch {\r\n Write-Warning \"Something went wrong. Skipping.\"\r\n continue\r\n }\r\n }\r\n }\r\n\r\n $newfield = \"Price\"\r\n if ((Get-PnPField -List \"$newListName\").Title -notcontains \"$newfield\") {\r\n Write-Progress -Id 1 \"Adding the $newfield field\"\r\n try {\r\n Add-PnPField -List \"$newListName\" -DisplayName \"$newfield\" -InternalName \"$newfield\" -Type \"Note\" -AddToDefaultView -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error adding the $newfield field. Skipping.\"\r\n continue\r\n }\r\n }\r\n\r\n foreach ($item in $itemNames) {\r\n Write-Progress -Id 1 \"Processing item $item\"\r\n # find the field used for that country \r\n try {\r\n $result = Get-PnPListItem -List \"$item\" -ErrorAction Stop | Where-Object { $_.FieldValues.Title -eq \"$country\" } \r\n } catch {\r\n Write-Warning \"Error getting field. 
Skipping.\"\r\n continue\r\n }\r\n\r\n $values = @{\r\n \"Title\" = \"$item\"\r\n \"Price\" = if ($result.FieldValues.Keys -contains \"Price\") { $result.FieldValues.Price }\r\n }\r\n\r\n # The query is a CAML query - https://pnp.github.io/powershell/cmdlets/Get-PnPListItem.html#example-6 \r\n try {\r\n Write-Progress -Id 2 \"Looking up item $item in list $newListName\"\r\n $listItem = Get-PnpListItem -List \"$newListName\" -ErrorAction Stop `\r\n -Query \"<View><Query><Where><Eq><FieldRef Name='Title'/><Value Type='Text'>$item</Value></Eq></Where></Query></View>\"\r\n\r\n } catch {\r\n Write-Warning \"Error getting existing item. Skipping.\"\r\n continue\r\n }\r\n\r\n if ($listItem) {\r\n Write-Progress -Id 2 \"Updating existing item\"\r\n try {\r\n Set-PnPListItem -List \"$newListName\" -Values $values -Identity $listItem -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error updating $field in $newListName\"\r\n }\r\n } else {\r\n Write-Progress -Id 2 \"Creating new item\"\r\n try {\r\n Add-PnPListItem -List \"$newListName\" -Values $values -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error adding $field to $newListName\"\r\n }\r\n }\r\n }\r\n}
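Stripped of the SharePoint plumbing, the script is essentially pivoting item→country price tables into country→item ones. A minimal Python sketch of that inversion (the function name and data shapes are my own, purely to illustrate the idea):

```python
def pivot_prices(item_lists):
    # item_lists maps an item list name to a {country: price} dict.
    # Returns {"Country: <name>": {item: price}} — one dict per country,
    # i.e. the rows upserted into each "Country: ..." list.
    countries = {}
    for item, prices in item_lists.items():
        for country, price in prices.items():
            countries.setdefault(f"Country: {country}", {})[item] = price
    return countries
```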
Now I can have this running periodically (I suppose I could look into triggering this whenever there’s a change to the item lists) and have the country lists up to date. Nice, huh!
\nHere’s an example country list:
\n\nAnd here’s all the automatically created lists, with 6 items each:
\n\n", "content_text": "I had to do something with SharePoint lists today and had some fun with PnP PowerShell in the process. Here’s what I had to do. (What follows is a made up example).\nI have a bunch of lists that contain the prices of an item per country. For instance, there’s 6 lists for 6 items.\n\nHere’s what one of the lists looks like:\n\nAll 6 lists have the same countries, and the price of the item in that country.\nWhat I wanted to do was have country specific lists that showed the price of the 6 items for that country. Initially I thought I might be able to do a lookup or something between lists, but that doesn’t work. Instead I decided to create new lists that are populated via PnP PowerShell.\nTo begin with, let’s get the names of the items and the countries. I do the first by finding all lists with “Item xx Prices” as the name. And for the latter I simply read all the Title field values from one of these lists.$itemNames = (Get-PnpList | Where-Object { $_.Title -match \"^Item.*Prices$\" }).Title\r\n\r\n$countryNames = (Get-PnPListItem -List \"Item 1 Prices\").FieldValues.TitleThen I’d want to process each country. Check if there’s a list with that country name already, and if not create it. Then check if the list (new or existing) has a field called Price, and if not create it. 
And lastly go through each of the item lists, find the price of the item for that country, and add these as rows to the newly created list.\nI did it this way (the snippet includes the two lookups I showed above too).$itemNames = (Get-PnpList | Where-Object { $_.Title -match \"^Item.*Prices$\" }).Title\r\n\r\n$countryNames = (Get-PnPListItem -List \"Item 1 Prices\").FieldValues.Title\r\n\r\nforeach ($country in $countryNames) {\r\n $newListName = \"Country: $country\"\r\n\r\n Write-Progress \"Processing $newListName\"\r\n\r\n try {\r\n Get-PnPList -Identity \"$newListName\" -ErrorAction Stop | Out-Null\r\n Write-Progress \"List exists $newListName\"\r\n } catch {\r\n if ($_.Exception.Message -match \"does not exist at site\") {\r\n Write-Progress \"Creating new list $newListName\"\r\n try {\r\n New-PnPList -Title \"$newListName\" -Template GenericList -ErrorAction Stop | Out-Null\r\n Write-Progress \"Created new list $newListName\"\r\n Start-Sleep -Seconds 5\r\n } catch {\r\n Write-Warning \"Something went wrong. Skipping.\"\r\n continue\r\n }\r\n }\r\n }\r\n\r\n $newfield = \"Price\"\r\n if ((Get-PnPField -List \"$newListName\").Title -notcontains \"$newfield\") {\r\n Write-Progress -Id 1 \"Adding the $newfield field\"\r\n try {\r\n Add-PnPField -List \"$newListName\" -DisplayName \"$newfield\" -InternalName \"$newfield\" -Type \"Note\" -AddToDefaultView -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error adding the $newfield field. Skipping.\"\r\n continue\r\n }\r\n }\r\n\r\n foreach ($item in $itemNames) {\r\n Write-Progress -Id 1 \"Processing item $item\"\r\n # find the field used for that country \r\n try {\r\n $result = Get-PnPListItem -List \"$item\" -ErrorAction Stop | Where-Object { $_.FieldValues.Title -eq \"$country\" } \r\n } catch {\r\n Write-Warning \"Error getting field. 
Skipping.\"\r\n continue\r\n }\r\n\r\n $values = @{\r\n \"Title\" = \"$item\"\r\n \"Price\" = if ($result.FieldValues.Keys -contains \"Price\") { $result.FieldValues.Price }\r\n }\r\n\r\n # The query is a CAML query - https://pnp.github.io/powershell/cmdlets/Get-PnPListItem.html#example-6 \r\n try {\r\n Write-Progress -Id 2 \"Looking up item $item in list $newListName\"\r\n $listItem = Get-PnpListItem -List \"$newListName\" -ErrorAction Stop `\r\n -Query \"<View><Query><Where><Eq><FieldRef Name='Title'/><Value Type='Text'>$item</Value></Eq></Where></Query></View>\"\r\n\r\n } catch {\r\n Write-Warning \"Error getting existing item. Skipping.\"\r\n continue\r\n }\r\n\r\n if ($listItem) {\r\n Write-Progress -Id 2 \"Updating existing item\"\r\n try {\r\n Set-PnPListItem -List \"$newListName\" -Values $values -Identity $listItem -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error updating $field in $newListName\"\r\n }\r\n } else {\r\n Write-Progress -Id 2 \"Creating new item\"\r\n try {\r\n Add-PnPListItem -List \"$newListName\" -Values $values -ErrorAction Stop | Out-Null\r\n } catch {\r\n Write-Warning \"Error adding $field to $newListName\"\r\n }\r\n }\r\n }\r\n}Now I can have this running periodically (I suppose I could look into triggering this whenever there’s a change to the item lists) and have the country lists up to date. Nice, huh! 
\nHere’s an example country list:\n\nAnd here’s all the automatically created lists, with 6 items each:", "date_published": "2023-08-06T18:49:50+01:00", "date_modified": "2023-08-06T18:51:27+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "pnp.powershell", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7354", "url": "https://rakhesh.com/wordpress/jetpack-connection-is-broken/", "title": "Jetpack connection is broken", "content_html": "I was trying to add the Mastodon connection to this blog today and once I’d enter my credentials to connect Jetpack errored out with a Jetpack connection is broken error.
\nStep 5 of the troubleshooting steps was to check if /xmlrpc.php is accessible. It should give the following message if all goes well: XML-RPC server accepts POST requests only
. In my case it didn’t. First I got an error that the site doesn’t support https; and once I clicked past that the connection was reset.
I don’t remember actively disabling it, but I use Cloudways and Googled along those lines. Sure enough, Cloudways helpfully disables it for security reasons. So I went there, enabled it, tried again and now I could connect the two.
\nBut then I disabled it again, and now the Jetpack page has forgotten the Mastodon connection and still complains that it can’t access my site. I do see the Mastodon connection when typing this post though, so hopefully it posts there coz my blog is now aware of it.
\n", "content_text": "I was trying to add the Mastodon connection to this blog today and once I’d enter my credentials to connect Jetpack errored out with a Jetpack connection is broken error.\nStep 5 of the troubleshooting steps was to check if /xmlrpc.php is accessible. It should give the following message if all goes well: XML-RPC server accepts POST requests only. In my case it didn’t. First I got an error that the site doesn’t support https; and once I clicked past that the connection was reset.\nI don’t remember actively disabling it, but I use Cloudways and Googled along those lines. Sure enough, Cloudways helpfully disables it for security reasons. So I went there, enabled it, tried again and now I could connect the two.\nBut then I disabled it again, and now the Jetpack page has forgotten the Mastodon connection and still complains that it can’t access my site. I do see the Mastodon connection when typing this post though, so hopefully it posts there coz my blog is now aware of it.", "date_published": "2023-08-05T10:47:51+01:00", "date_modified": "2023-08-05T10:47:51+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "WordPress" ] }, { "id": "https://rakhesh.com/?p=7343", "url": "https://rakhesh.com/azure/storing-api-keys-in-power-platform-custom-connectors/", "title": "Storing API keys in Power Platform custom connectors", "content_html": "I was working with a colleague on creating a custom connector to talk to Fresh Service. He did most of the hard work, I got involved towards the end to figure out some authentication stuff.
\nWith Fresh, for instance, to authenticate you need to send an API key.
curl -v -u apikey:X -H \"Content-Type: application/json\" -X GET 'https://domain.freshservice.com/api/v2/tickets'
The username is the API key you get from their website, the password is X (capital X).
\nThe equivalent of the above in PowerShell would be:
$apiKey = \"whatever\"\r\n$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes((\"{0}:{1}\" -f $apiKey,\"X\")))\r\n\r\n$freshHeaders = @{\r\n \"Authorization\" = \"Basic $base64AuthInfo\"\r\n \"Content-Type\" = \"application/json\"\r\n}\r\n\r\nInvoke-RestMethod -Headers $freshHeaders -Method 'GET' -Uri \"${freshBaseUrl}/api/v2/tickets\"
Basically you have to convert the “API Key:X” bit to Base64 and then send it along.
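For completeness, the same header can be built in a few lines of Python too (an illustrative sketch; the helper name is mine):

```python
import base64

def fresh_auth_header(api_key: str) -> str:
    # Fresh uses HTTP Basic auth: the API key is the username and the
    # password is a literal capital "X". Base64-encode "key:X" and
    # prefix it with "Basic ".
    token = base64.b64encode(f"{api_key}:X".encode("ascii")).decode("ascii")
    return f"Basic {token}"
```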
\nSo far so good. But how about when using a custom connector? You can select Basic authentication when creating one, but that just prompts the user for the API key and X when they create a connection with the connector. Which might be fine too, coz you’d want each user to use their own API key after all.
\n\nBut we wanted to provide the connector as something users can use without entering their key. We have a “service account” in Fresh and want to use its API key as that’s got additional rights. But at the same time we don’t want to open it up for everyone. A custom connector is perfect in that respect because we can expose just the API actions we need… if only we could figure a way of putting this key somewhere!
\nThe trick is to set the authentication as “No authentication”.
\n\nAnd then, in the Definition section, under Policies, create a new policy.
\n\nAnd here, give it a name, then choose the “Set HTTP header” template.
\nAfter that fill as follows:
\n\nReplace the bit after “Basic” with the Base64 encoded value of <username>:<password>
. Which in the case of Fresh is APIKey:X
.
That’s it. Now your custom connector won’t prompt users for an API key.
\nIf you export the connector, this info isn’t exported. When you import it elsewhere you will have to add it again. Which is good.
\nThere is a small catch in this though, in that when you publish the custom connector in an environment with View rights, even though the edit icon is grayed out:
\n\nSomeone can still click the three dots, go to View properties:
\n\nAnd notice the “Edit” button there? Yeah… they can click it to see everything!
\n\nUsers can’t change anything, but this leaves the API key you added above visible to them.
\nWhich is a bummer of course. I’ve raised a ticket with Microsoft to see if we can do something about this. There’s no reason why someone with View rights should be able to see inside the custom connector. Technically, I understand, they are still only viewing the information… but still, that’s not what one was expecting.
\nAnother weird behaviour with custom connectors.
\nIf you add it within an environment it respects the share permissions. As in, only those whom you make it available to as View or Edit or View & Share can actually see it (albeit also see inside it as above). On the other hand, if you add the custom connector to a solution in the environment – be it adding directly, or adding to the environment and then importing into the solution – then everyone can now see the custom connector and also edit/ delete it. Crazy!
\nSo, always import custom connectors into your environment and not into a solution.
\n", "content_text": "I was working with a colleague on creating a custom connector to talk to Fresh Service. He did most of the hard work, I got involved towards the end to figure out some authentication stuff.\nWith Fresh, for instance, to authenticate you need to send an API key.curl -v -u apikey:X -H \"Content-Type: application/json\" -X GET 'https://domain.freshservice.com/api/v2/tickets'The username is the API key you get from their website, the password is X (capital X).\nThe equivalent of the above in PowerShell would be:$apiKey = \"whatever\"\r\n$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes((\"{0}:{1}\" -f $apiKey,\"X\")))\r\n\r\n$freshHeaders = @{\r\n \"Authorization\" = \"Basic $base64AuthInfo\"\r\n \"Content-Type\" = \"application/json\"\r\n}\r\n\r\nInvoke-RestMethod -Headers $freshHeaders -Method 'GET' -Uri \"${freshBaseUrl}/api/v2/tickets\"Basically you have to convert the “API Key:X” bit to Base64 and then send it along.\nSo far so good. But how about when using a custom connector? When creating one you can select Basic authentication but that just prompts the user to add the API Key and X when creating a connection using the custom connector. Which might be fine too, coz you’d want each user to use their own API key after all.\n\nBut we wanted to provide the connector as something users can use without entering their key. We have a “service account” in Fresh and want to use its API key as that’s got additional rights. But at the same time we don’t want to open it up for everyone. 
A custom connector is perfect in that respect because we can expose just the API actions we need… if only we could figure a way of putting this key somewhere!\nThe trick is to set the authentication as “No authentication”.\n\nAnd then, in the Definition section, under Policies, create a new policy.\n\nAnd here, give it a name, then choose the “Set HTTP header” template.\nAfter that fill as follows:\n\nReplace the bit after “Basic” with the Base64 encoded value of <username>:<password>. Which in the case of Fresh is APIKey:X.\nThat’s it. Now your custom connector won’t prompt users for an API key.\nIf you export the connector, this info isn’t exported. When you import it elsewhere you will have to add it again. Which is good.\nThere is a small catch in this though, in that when you publish the custom connector in an environment with View rights, even though the edit icon is grayed out:\n\nSomeone can still click the three dots, go to View properties:\n\nAnd notice the “Edit” button there? Yeah… they can click it to see everything! \n\nUsers can’t change anything, but this leaves the API key you added above visible to them.\nWhich is a bummer of course. I’ve raised a ticket with Microsoft to see if we can do something about this. There’s no reason why someone with View rights should be able to see inside the custom connector. Technically, I understand, they are still only viewing the information.. but still, that’s not what one was expecting.\nAnother weird behaviour with custom connectors.\nIf you add it within an environment it respects the share permissions. As in, only those whom you make it available to as View or Edit or View & Share can actually see it (albeit also see inside it as above). On the other hand, if you add the custom connector to a solution in the environment – be it adding directly, or adding to the environment and then importing into the solution – then everyone can now see the custom connector and also edit/ delete it. 
Crazy!\nSo, always import custom connectors into your environment and not solution.", "date_published": "2023-08-04T19:15:36+01:00", "date_modified": "2023-08-04T19:16:17+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "authentication", "power automate", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7338", "url": "https://rakhesh.com/azure/well-known-client-ids/", "title": "Well known client ids", "content_html": "From Authentication with a data source – Power Query
\nThe following Azure Active Directory client IDs are used by Power Query. You might need to explicitly allow these client IDs to access your service and API, depending on your overall Azure Active Directory settings.
\nClient ID | \nTitle | \nDescription | \n
---|---|---|
a672d62c-fc7b-4e81-a576-e60dc46e951d | \nPower Query for Excel | \nPublic client, used in Power BI Desktop and Gateway. | \n
b52893c8-bc2e-47fc-918b-77022b299bbc | \nPower BI Data Refresh | \nConfidential client, used in Power BI service. | \n
7ab7862c-4c57-491e-8a45-d52a7e023983 | \nPower Apps and Power Automate | \nConfidential client, used in Power Apps and Power Automate. | \n
The following table lists some, but not all, first-party Microsoft applications. You may see these applications in the Sign-ins report in Azure AD.
\nApplication Name | \nApplication IDs | \n
---|---|
ACOM Azure Website | \n23523755-3a2b-41ca-9315-f81f3f566a95 | \n
AEM-DualAuth | \n69893ee3-dd10-4b1c-832d-4870354be3d8 | \n
App Service | \n7ab7862c-4c57-491e-8a45-d52a7e023983 | \n
ASM Campaign Servicing | \n0cb7b9ec-5336-483b-bc31-b15b5788de71 | \n
Azure Advanced Threat Protection | \n7b7531ad-5926-4f2d-8a1d-38495ad33e17 | \n
Azure Data Lake | \ne9f49c6b-5ce5-44c8-925d-015017e9f7ad | \n
Azure Lab Services Portal | \n835b2a73-6e10-4aa5-a979-21dfda45231c | \n
Azure Portal | \nc44b4083-3bb0-49c1-b47d-974e53cbdf3c | \n
AzureSupportCenter | \n37182072-3c9c-4f6a-a4b3-b3f91cacffce | \n
Bing | \n9ea1ad79-fdb6-4f9a-8bc3-2b70f96e34c7 | \n
ContactsInferencingEmailProcessor | \n20a11fe0-faa8-4df5-baf2-f965f8f9972e | \n
CPIM Service | \nbb2a2e3a-c5e7-4f0a-88e0-8e01fd3fc1f4 | \n
CRM Power BI Integration | \ne64aa8bc-8eb4-40e2-898b-cf261a25954f | \n
Dataverse | \n00000007-0000-0000-c000-000000000000 | \n
Enterprise Roaming and Backup | \n60c8bde5-3167-4f92-8fdb-059f6176dc0f | \n
Exchange Admin Center | \n497effe9-df71-4043-a8bb-14cf78c4b63b | \n
FindTime | \nf5eaa862-7f08-448c-9c4e-f4047d4d4521 | \n
Focused Inbox | \nb669c6ea-1adf-453f-b8bc-6d526592b419 | \n
GroupsRemoteApiRestClient | \nc35cb2ba-f88b-4d15-aa9d-37bd443522e1 | \n
HxService | \nd9b8ec3a-1e4e-4e08-b3c2-5baf00c0fcb0 | \n
IAM Supportability | \na57aca87-cbc0-4f3c-8b9e-dc095fdc8978 | \n
IrisSelectionFrontDoor | \n16aeb910-ce68-41d1-9ac3-9e1673ac9575 | \n
MCAPI Authorization Prod | \nd73f4b35-55c9-48c7-8b10-651f6f2acb2e | \n
Media Analysis and Transformation Service | \n944f0bd1-117b-4b1c-af26-804ed95e767e \n0cd196ee-71bf-4fd6-a57c-b491ffd4fb1e | \n
Microsoft 365 Support Service | \nee272b19-4411-433f-8f28-5c13cb6fd407 | \n
Microsoft App Access Panel | \n0000000c-0000-0000-c000-000000000000 | \n
Microsoft Approval Management | \n65d91a3d-ab74-42e6-8a2f-0add61688c74 \n38049638-cc2c-4cde-abe4-4479d721ed44 | \n
Microsoft Authentication Broker | \n29d9ed98-a469-4536-ade2-f981bc1d605e | \n
Microsoft Azure CLI | \n04b07795-8ddb-461a-bbee-02f9e1bf7b46 | \n
Microsoft Azure PowerShell | \n1950a258-227b-4e31-a9cf-717495945fc2 | \n
MicrosoftAzureActiveAuthn | \n0000001a-0000-0000-c000-000000000000 | \n
Microsoft Bing Search | \ncf36b471-5b44-428c-9ce7-313bf84528de | \n
Microsoft Bing Search for Microsoft Edge | \n2d7f3606-b07d-41d1-b9d2-0d0c9296a6e8 | \n
Microsoft Bing Default Search Engine | \n1786c5ed-9644-47b2-8aa0-7201292175b6 | \n
Microsoft Defender for Cloud Apps | \n3090ab82-f1c1-4cdf-af2c-5d7a6f3e2cc7 | \n
Microsoft Docs | \n18fbca16-2224-45f6-85b0-f7bf2b39b3f3 | \n
Microsoft Dynamics ERP | \n00000015-0000-0000-c000-000000000000 | \n
Microsoft Edge Insider Addons Prod | \n6253bca8-faf2-4587-8f2f-b056d80998a7 | \n
Microsoft Exchange ForwardSync | \n99b904fd-a1fe-455c-b86c-2f9fb1da7687 | \n
Microsoft Exchange Online Protection | \n00000007-0000-0ff1-ce00-000000000000 | \n
Microsoft Exchange ProtectedServiceHost | \n51be292c-a17e-4f17-9a7e-4b661fb16dd2 | \n
Microsoft Exchange REST API Based Powershell | \nfb78d390-0c51-40cd-8e17-fdbfab77341b | \n
Microsoft Forms | \nc9a559d2-7aab-4f13-a6ed-e7e9c52aec87 | \n
Microsoft Graph | \n00000003-0000-0000-c000-000000000000 | \n
Microsoft Intune Web Company Portal | \n74bcdadc-2fdc-4bb3-8459-76d06952a0e9 | \n
Microsoft Intune Windows Agent | \nfc0f3af4-6835-4174-b806-f7db311fd2f3 | \n
Microsoft Office | \nd3590ed6-52b3-4102-aeff-aad2292ab01c | \n
Microsoft Office 365 Portal | \n00000006-0000-0ff1-ce00-000000000000 | \n
Microsoft Office Web Apps Service | \n67e3df25-268a-4324-a550-0de1c7f97287 | \n
Microsoft Online Syndication Partner Portal | \nd176f6e7-38e5-40c9-8a78-3998aab820e7 | \n
Microsoft password reset service | \n93625bc8-bfe2-437a-97e0-3d0060024faa | \n
Microsoft Power BI | \n871c010f-5e61-4fb1-83ac-98610a7e9110 | \n
Microsoft Storefronts | \n28b567f6-162c-4f54-99a0-6887f387bbcc | \n
Microsoft Stream Portal | \ncf53fce8-def6-4aeb-8d30-b158e7b1cf83 | \n
Microsoft Substrate Management | \n98db8bd6-0cc0-4e67-9de5-f187f1cd1b41 | \n
Microsoft Support | \nfdf9885b-dd37-42bf-82e5-c3129ef5a302 | \n
Microsoft Teams | \n1fec8e78-bce4-4aaf-ab1b-5451cc387264 | \n
Microsoft Teams Services | \ncc15fd57-2c6c-4117-a88c-83b1d56b4bbe | \n
Microsoft Teams Web Client | \n5e3ce6c0-2b1f-4285-8d4b-75ee78787346 | \n
Microsoft Whiteboard Services | \n95de633a-083e-42f5-b444-a4295d8e9314 | \n
O365 SkypeSpaces Ingestion Service | \ndfe74da8-9279-44ec-8fb2-2aed9e1c73d0 | \n
O365 Suite UX | \n4345a7b9-9a63-4910-a426-35363201d503 | \n
Office 365 Exchange Online | \n00000002-0000-0ff1-ce00-000000000000 | \n
Office 365 Management | \n00b41c95-dab0-4487-9791-b9d2c32c80f2 | \n
Office 365 Search Service | \n66a88757-258c-4c72-893c-3e8bed4d6899 | \n
Office 365 SharePoint Online | \n00000003-0000-0ff1-ce00-000000000000 | \n
Office Delve | \n94c63fef-13a3-47bc-8074-75af8c65887a | \n
Office Online Add-in SSO | \n93d53678-613d-4013-afc1-62e9e444a0a5 | \n
Office Online Client AAD- Augmentation Loop | \n2abdc806-e091-4495-9b10-b04d93c3f040 | \n
Office Online Client AAD- Loki | \nb23dd4db-9142-4734-867f-3577f640ad0c | \n
Office Online Client AAD- Maker | \n17d5e35f-655b-4fb0-8ae6-86356e9a49f5 | \n
Office Online Client MSA- Loki | \nb6e69c34-5f1f-4c34-8cdf-7fea120b8670 | \n
Office Online Core SSO | \n243c63a3-247d-41c5-9d83-7788c43f1c43 | \n
Office Online Search | \na9b49b65-0a12-430b-9540-c80b3332c127 | \n
Office.com | \n4b233688-031c-404b-9a80-a4f3f2351f90 | \n
Office365 Shell WCSS-Client | \n89bee1f7-5e6e-4d8a-9f3d-ecd601259da7 | \n
OfficeClientService | \n0f698dd4-f011-4d23-a33e-b36416dcb1e6 | \n
OfficeHome | \n4765445b-32c6-49b0-83e6-1d93765276ca | \n
OfficeShredderWacClient | \n4d5c2d63-cf83-4365-853c-925fd1a64357 | \n
OMSOctopiPROD | \n62256cef-54c0-4cb4-bcac-4c67989bdc40 | \n
OneDrive SyncEngine | \nab9b8c07-8f02-4f72-87fa-80105867a763 | \n
OneNote | \n2d4d3d8e-2be3-4bef-9f87-7875a61c29de | \n
Outlook Mobile | \n27922004-5251-4030-b22d-91ecd9a37ea4 | \n
Partner Customer Delegated Admin Offline Processor | \na3475900-ccec-4a69-98f5-a65cd5dc5306 | \n
Password Breach Authenticator | \nbdd48c81-3a58-4ea9-849c-ebea7f6b6360 | \n
PeoplePredictions | \n35d54a08-36c9-4847-9018-93934c62740c | \n
Power BI Service | \n00000009-0000-0000-c000-000000000000 | \n
Scheduling | \nae8e128e-080f-4086-b0e3-4c19301ada69 | \n
SharedWithMe | \nffcb16e8-f789-467c-8ce9-f826a080d987 | \n
SharePoint Online Web Client Extensibility | \n08e18876-6177-487e-b8b5-cf950c1e598c | \n
Signup | \nb4bddae8-ab25-483e-8670-df09b9f1d0ea | \n
Skype for Business Online | \n00000004-0000-0ff1-ce00-000000000000 | \n
SpoolsProvisioning | \n61109738-7d2b-4a0b-9fe3-660b1ff83505 | \n
Sticky Notes API | \n91ca2ca5-3b3e-41dd-ab65-809fa3dffffa | \n
Substrate Context Service | \n13937bba-652e-4c46-b222-3003f4d1ff97 | \n
SubstrateDirectoryEventProcessor | \n26abc9a8-24f0-4b11-8234-e86ede698878 | \n
Substrate Search Settings Management Service | \na970bac6-63fe-4ec5-8884-8536862c42d4 | \n
Sway | \n905fcf26-4eb7-48a0-9ff0-8dcc7194b5ba | \n
Transcript Ingestion | \n97cb1f73-50df-47d1-8fb0-0271f2728514 | \n
Universal Store Native Client | \n268761a2-03f3-40df-8a8b-c3db24145b6b | \n
Viva Engage (formerly Yammer) | \n00000005-0000-0ff1-ce00-000000000000 | \n
WeveEngine | \n3c896ded-22c5-450f-91f6-3d1ef0848f6e | \n
Windows Azure Active Directory | \n00000002-0000-0000-c000-000000000000 | \n
Windows Azure Security Resource Provider | \n8edd93e1-2103-40b4-bd70-6e34e586362d | \n
Windows Azure Service Management API | \n797f4846-ba00-4fd7-ba43-dac1f8f63013 | \n
WindowsDefenderATP Portal | \na3b79187-70b2-4139-83f9-6016c58cd27b | \n
Windows Search | \n26a7ee05-5602-4d76-a7ba-eae8b7b67941 | \n
Windows Spotlight | \n1b3c667f-cde3-4090-b60b-3d2abd0117f0 | \n
Windows Store for Business | \n45a330b1-b1ec-4cc1-9161-9f03992aa49f | \n
Yammer Web | \nc1c74fed-04c9-4704-80dc-9f79a2e515cb | \n
Yammer Web Embed | \ne1ef36fd-b883-4dbf-97f0-9ece4b576fc6 | \n
The following table lists some, but not all, Microsoft tenant-owned applications (tenant ID: 72f988bf-86f1-41af-91ab-2d7cd011db47).
\nApplication Name | \nApplication IDs | \n
---|---|
Graph Explorer | \nde8bc8b5-d9f9-48b1-a8ad-b748da725064 | \n
Microsoft Graph Command Line Tools | \n14d82eec-204b-4c2f-b7e8-296a70dab67e | \n
OutlookUserSettingsConsumer | \n7ae974c5-1af7-4923-af3a-fb1fd14dcb7e | \n
Vortex | \n5572c4c0-d078-44ce-b81c-6cbf8d3ed39e | \n
Update (15th August 2023): See this GitHub repo by Merill. And associated Tweet.
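These IDs are handy when querying the sign-in logs programmatically rather than through the portal. A sketch of building such a query (this assumes the `auditLogs/signIns` Graph endpoint supports filtering on `appId`, which it does at the time of writing; the request itself is omitted):

```python
# Build a Graph query for sign-ins from one of the first-party apps above.
GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def signins_filter_url(app_id: str) -> str:
    """Return a signIns query URL filtered to a single application ID."""
    return f"{GRAPH_SIGNINS}?$filter=appId eq '{app_id}'"

# Microsoft Azure CLI, from the table above
print(signins_filter_url("04b07795-8ddb-461a-bbee-02f9e1bf7b46"))
```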
\n", "content_text": "From Authentication with a data source – Power Query\nThe following Azure Active Directory client IDs are used by Power Query. You might need to explicitly allow these client IDs to access your service and API, depending on your overall Azure Active Directory settings.\n\n\n\nClient ID\nTitle\nDescription\n\n\na672d62c-fc7b-4e81-a576-e60dc46e951d\nPower Query for Excel\nPublic client, used in Power BI Desktop and Gateway.\n\n\nb52893c8-bc2e-47fc-918b-77022b299bbc\nPower BI Data Refresh\nConfidential client, used in Power BI service.\n\n\n7ab7862c-4c57-491e-8a45-d52a7e023983\nPower Apps and Power Automate\nConfidential client, used in Power Apps and Power Automate.\n\n\n\nApplication IDs of commonly used Microsoft applications\nFrom SupportArticles-docs/support/azure/active-directory/verify-first-party-apps-sign-in.md at main \u00b7 MicrosoftDocs/SupportArticles-docs\nThe following table lists some, but not all, first-party Microsoft applications. You may see these applications in the Sign-ins report in Azure AD.\n\n\n\nApplication Name\nApplication IDs\n\n\nACOM Azure Website\n23523755-3a2b-41ca-9315-f81f3f566a95\n\n\nAEM-DualAuth\n69893ee3-dd10-4b1c-832d-4870354be3d8\n\n\nApp Service\n7ab7862c-4c57-491e-8a45-d52a7e023983\n\n\nASM Campaign Servicing\n0cb7b9ec-5336-483b-bc31-b15b5788de71\n\n\nAzure Advanced Threat Protection\n7b7531ad-5926-4f2d-8a1d-38495ad33e17\n\n\nAzure Data Lake\ne9f49c6b-5ce5-44c8-925d-015017e9f7ad\n\n\nAzure Lab Services Portal\n835b2a73-6e10-4aa5-a979-21dfda45231c\n\n\nAzure Portal\nc44b4083-3bb0-49c1-b47d-974e53cbdf3c\n\n\nAzureSupportCenter\n37182072-3c9c-4f6a-a4b3-b3f91cacffce\n\n\nBing\n9ea1ad79-fdb6-4f9a-8bc3-2b70f96e34c7\n\n\nContactsInferencingEmailProcessor\n20a11fe0-faa8-4df5-baf2-f965f8f9972e\n\n\nCPIM Service\nbb2a2e3a-c5e7-4f0a-88e0-8e01fd3fc1f4\n\n\nCRM Power BI Integration\ne64aa8bc-8eb4-40e2-898b-cf261a25954f\n\n\nDataverse\n00000007-0000-0000-c000-000000000000\n\n\nEnterprise Roaming and 
Backup\n60c8bde5-3167-4f92-8fdb-059f6176dc0f\n\n\nExchange Admin Center\n497effe9-df71-4043-a8bb-14cf78c4b63b\n\n\nFindTime\nf5eaa862-7f08-448c-9c4e-f4047d4d4521\n\n\nFocused Inbox\nb669c6ea-1adf-453f-b8bc-6d526592b419\n\n\nGroupsRemoteApiRestClient\nc35cb2ba-f88b-4d15-aa9d-37bd443522e1\n\n\nHxService\nd9b8ec3a-1e4e-4e08-b3c2-5baf00c0fcb0\n\n\nIAM Supportability\na57aca87-cbc0-4f3c-8b9e-dc095fdc8978\n\n\nIrisSelectionFrontDoor\n16aeb910-ce68-41d1-9ac3-9e1673ac9575\n\n\nMCAPI Authorization Prod\nd73f4b35-55c9-48c7-8b10-651f6f2acb2e\n\n\nMedia Analysis and Transformation Service\n944f0bd1-117b-4b1c-af26-804ed95e767e\n0cd196ee-71bf-4fd6-a57c-b491ffd4fb1e\n\n\nMicrosoft 365 Support Service\nee272b19-4411-433f-8f28-5c13cb6fd407\n\n\nMicrosoft App Access Panel\n0000000c-0000-0000-c000-000000000000\n\n\nMicrosoft Approval Management\n65d91a3d-ab74-42e6-8a2f-0add61688c74\n38049638-cc2c-4cde-abe4-4479d721ed44\n\n\nMicrosoft Authentication Broker\n29d9ed98-a469-4536-ade2-f981bc1d605e\n\n\nMicrosoft Azure CLI\n04b07795-8ddb-461a-bbee-02f9e1bf7b46\n\n\nMicrosoft Azure PowerShell\n1950a258-227b-4e31-a9cf-717495945fc2\n\n\nMicrosoftAzureActiveAuthn\n0000001a-0000-0000-c000-000000000000\n\n\nMicrosoft Bing Search\ncf36b471-5b44-428c-9ce7-313bf84528de\n\n\nMicrosoft Bing Search for Microsoft Edge\n2d7f3606-b07d-41d1-b9d2-0d0c9296a6e8\n\n\nMicrosoft Bing Default Search Engine\n1786c5ed-9644-47b2-8aa0-7201292175b6\n\n\nMicrosoft Defender for Cloud Apps\n3090ab82-f1c1-4cdf-af2c-5d7a6f3e2cc7\n\n\nMicrosoft Docs\n18fbca16-2224-45f6-85b0-f7bf2b39b3f3\n\n\nMicrosoft Dynamics ERP\n00000015-0000-0000-c000-000000000000\n\n\nMicrosoft Edge Insider Addons Prod\n6253bca8-faf2-4587-8f2f-b056d80998a7\n\n\nMicrosoft Exchange ForwardSync\n99b904fd-a1fe-455c-b86c-2f9fb1da7687\n\n\nMicrosoft Exchange Online Protection\n00000007-0000-0ff1-ce00-000000000000\n\n\nMicrosoft Exchange ProtectedServiceHost\n51be292c-a17e-4f17-9a7e-4b661fb16dd2\n\n\nMicrosoft Exchange REST API Based 
Powershell\nfb78d390-0c51-40cd-8e17-fdbfab77341b\n\n\nMicrosoft Forms\nc9a559d2-7aab-4f13-a6ed-e7e9c52aec87\n\n\nMicrosoft Graph\n00000003-0000-0000-c000-000000000000\n\n\nMicrosoft Intune Web Company Portal\n74bcdadc-2fdc-4bb3-8459-76d06952a0e9\n\n\nMicrosoft Intune Windows Agent\nfc0f3af4-6835-4174-b806-f7db311fd2f3\n\n\nMicrosoft Office\nd3590ed6-52b3-4102-aeff-aad2292ab01c\n\n\nMicrosoft Office 365 Portal\n00000006-0000-0ff1-ce00-000000000000\n\n\nMicrosoft Office Web Apps Service\n67e3df25-268a-4324-a550-0de1c7f97287\n\n\nMicrosoft Online Syndication Partner Portal\nd176f6e7-38e5-40c9-8a78-3998aab820e7\n\n\nMicrosoft password reset service\n93625bc8-bfe2-437a-97e0-3d0060024faa\n\n\nMicrosoft Power BI\n871c010f-5e61-4fb1-83ac-98610a7e9110\n\n\nMicrosoft Storefronts\n28b567f6-162c-4f54-99a0-6887f387bbcc\n\n\nMicrosoft Stream Portal\ncf53fce8-def6-4aeb-8d30-b158e7b1cf83\n\n\nMicrosoft Substrate Management\n98db8bd6-0cc0-4e67-9de5-f187f1cd1b41\n\n\nMicrosoft Support\nfdf9885b-dd37-42bf-82e5-c3129ef5a302\n\n\nMicrosoft Teams\n1fec8e78-bce4-4aaf-ab1b-5451cc387264\n\n\nMicrosoft Teams Services\ncc15fd57-2c6c-4117-a88c-83b1d56b4bbe\n\n\nMicrosoft Teams Web Client\n5e3ce6c0-2b1f-4285-8d4b-75ee78787346\n\n\nMicrosoft Whiteboard Services\n95de633a-083e-42f5-b444-a4295d8e9314\n\n\nO365 SkypeSpaces Ingestion Service\ndfe74da8-9279-44ec-8fb2-2aed9e1c73d0\n\n\nO365 Suite UX\n4345a7b9-9a63-4910-a426-35363201d503\n\n\nOffice 365 Exchange Online\n00000002-0000-0ff1-ce00-000000000000\n\n\nOffice 365 Management\n00b41c95-dab0-4487-9791-b9d2c32c80f2\n\n\nOffice 365 Search Service\n66a88757-258c-4c72-893c-3e8bed4d6899\n\n\nOffice 365 SharePoint Online\n00000003-0000-0ff1-ce00-000000000000\n\n\nOffice Delve\n94c63fef-13a3-47bc-8074-75af8c65887a\n\n\nOffice Online Add-in SSO\n93d53678-613d-4013-afc1-62e9e444a0a5\n\n\nOffice Online Client AAD- Augmentation Loop\n2abdc806-e091-4495-9b10-b04d93c3f040\n\n\nOffice Online Client AAD- Loki\nb23dd4db-9142-4734-867f-3577f640ad0c\n\n\nOffice 
Online Client AAD- Maker\n17d5e35f-655b-4fb0-8ae6-86356e9a49f5\n\n\nOffice Online Client MSA- Loki\nb6e69c34-5f1f-4c34-8cdf-7fea120b8670\n\n\nOffice Online Core SSO\n243c63a3-247d-41c5-9d83-7788c43f1c43\n\n\nOffice Online Search\na9b49b65-0a12-430b-9540-c80b3332c127\n\n\nOffice.com\n4b233688-031c-404b-9a80-a4f3f2351f90\n\n\nOffice365 Shell WCSS-Client\n89bee1f7-5e6e-4d8a-9f3d-ecd601259da7\n\n\nOfficeClientService\n0f698dd4-f011-4d23-a33e-b36416dcb1e6\n\n\nOfficeHome\n4765445b-32c6-49b0-83e6-1d93765276ca\n\n\nOfficeShredderWacClient\n4d5c2d63-cf83-4365-853c-925fd1a64357\n\n\nOMSOctopiPROD\n62256cef-54c0-4cb4-bcac-4c67989bdc40\n\n\nOneDrive SyncEngine\nab9b8c07-8f02-4f72-87fa-80105867a763\n\n\nOneNote\n2d4d3d8e-2be3-4bef-9f87-7875a61c29de\n\n\nOutlook Mobile\n27922004-5251-4030-b22d-91ecd9a37ea4\n\n\nPartner Customer Delegated Admin Offline Processor\na3475900-ccec-4a69-98f5-a65cd5dc5306\n\n\nPassword Breach Authenticator\nbdd48c81-3a58-4ea9-849c-ebea7f6b6360\n\n\nPeoplePredictions\n35d54a08-36c9-4847-9018-93934c62740c\n\n\nPower BI Service\n00000009-0000-0000-c000-000000000000\n\n\nScheduling\nae8e128e-080f-4086-b0e3-4c19301ada69\n\n\nSharedWithMe\nffcb16e8-f789-467c-8ce9-f826a080d987\n\n\nSharePoint Online Web Client Extensibility\n08e18876-6177-487e-b8b5-cf950c1e598c\n\n\nSignup\nb4bddae8-ab25-483e-8670-df09b9f1d0ea\n\n\nSkype for Business Online\n00000004-0000-0ff1-ce00-000000000000\n\n\nSpoolsProvisioning\n61109738-7d2b-4a0b-9fe3-660b1ff83505\n\n\nSticky Notes API\n91ca2ca5-3b3e-41dd-ab65-809fa3dffffa\n\n\nSubstrate Context Service\n13937bba-652e-4c46-b222-3003f4d1ff97\n\n\nSubstrateDirectoryEventProcessor\n26abc9a8-24f0-4b11-8234-e86ede698878\n\n\nSubstrate Search Settings Management Service\na970bac6-63fe-4ec5-8884-8536862c42d4\n\n\nSway\n905fcf26-4eb7-48a0-9ff0-8dcc7194b5ba\n\n\nTranscript Ingestion\n97cb1f73-50df-47d1-8fb0-0271f2728514\n\n\nUniversal Store Native Client\n268761a2-03f3-40df-8a8b-c3db24145b6b\n\n\nViva Engage (formerly 
Yammer)\n00000005-0000-0ff1-ce00-000000000000\n\n\nWeveEngine\n3c896ded-22c5-450f-91f6-3d1ef0848f6e\n\n\nWindows Azure Active Directory\n00000002-0000-0000-c000-000000000000\n\n\nWindows Azure Security Resource Provider\n8edd93e1-2103-40b4-bd70-6e34e586362d\n\n\nWindows Azure Service Management API\n797f4846-ba00-4fd7-ba43-dac1f8f63013\n\n\nWindowsDefenderATP Portal\na3b79187-70b2-4139-83f9-6016c58cd27b\n\n\nWindows Search\n26a7ee05-5602-4d76-a7ba-eae8b7b67941\n\n\nWindows Spotlight\n1b3c667f-cde3-4090-b60b-3d2abd0117f0\n\n\nWindows Store for Business\n45a330b1-b1ec-4cc1-9161-9f03992aa49f\n\n\nYammer Web\nc1c74fed-04c9-4704-80dc-9f79a2e515cb\n\n\nYammer Web Embed\ne1ef36fd-b883-4dbf-97f0-9ece4b576fc6\n\n\n\nApplication IDs of Microsoft tenant-owned applications\nThe following table lists some, but not all, Microsoft tenant-owned applications (tenant ID: 72f988bf-86f1-41af-91ab-2d7cd011db47).\n\n\n\nApplication Name\nApplication IDs\n\n\nGraph Explorer\nde8bc8b5-d9f9-48b1-a8ad-b748da725064\n\n\nMicrosoft Graph Command Line Tools\n14d82eec-204b-4c2f-b7e8-296a70dab67e\n\n\nOutlookUserSettingsConsumer\n7ae974c5-1af7-4923-af3a-fb1fd14dcb7e\n\n\nVortex\n5572c4c0-d078-44ce-b81c-6cbf8d3ed39e\n\n\n\nUpdate (15th August 2023): See this GitHub repo by Merill. 
And associated Tweet.", "date_published": "2023-08-04T18:46:37+01:00", "date_modified": "2023-08-15T22:38:49+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "authentication", "azure ad", "microsoft graph", "power automate", "Azure, Azure AD, Graph, M365" ] }, { "id": "https://rakhesh.com/?p=7314", "url": "https://rakhesh.com/azure/getoffice365groupsactivitydetail-and-s2sunauthorized-and-custom-connectors/", "title": "getOffice365GroupsActivityDetail and S2SUnauthorized and custom connectors", "content_html": "It always impresses and scares me when I am trying to figure something out and while Googling I come across one of my own blog posts. Impressive, coz “wow I was so smart a few months ago and had figured this out!” but scary coz “shit, I’ve forgotten I ever did this”.
\nCase in hand: yesterday I blogged about the HTTP with Azure AD connector. I had forgotten there was already a blog post on it, and while I was looking to do some more stuff with it today I came across two more on my site.
\nBetween them, I think these two posts capture everything I know (or should know) about this connector.
\nAnyways, a colleague wanted to use the HTTP with Azure AD connector to get reports. The account in question has the Reports Reader role, and he was trying the following URL: https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')
But it failed:
{\r\n \"error\": {\r\n \"code\": \"UnknownError\",\r\n \"message\": \"{\\\"error\\\":{\\\"code\\\":\\\"S2SUnauthorized\\\",\\\"message\\\":\\\"Invalid permission.\\\"}}\",\r\n \"innerError\": {\r\n \"date\": \"2023-08-04T09:25:01\",\r\n \"request-id\": \"364325b0-65aa-4034-ae12-53f1ac1c4ce3\",\r\n \"client-request-id\": \"364325b0-65aa-4034-ae12-53f1ac1c4ce3\"\r\n }\r\n }\r\n}
It’s not surprising it failed. The connector has a limited set of scopes:
\n\nThat is to say, it probably doesn’t have the Reports.Read.All
delegated permission assigned to it, so even though the account can do it the connector can’t. Bummer.
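As an aside, notice that Graph wraps the real error as a JSON string inside the outer `message`, so it helps to unwrap it when handling these failures in code. A sketch, using the error above:

```python
import json

def inner_graph_error(body: dict) -> dict:
    """Pull the nested error (e.g. S2SUnauthorized) out of a Graph UnknownError wrapper."""
    return json.loads(body["error"]["message"])["error"]

wrapper = {"error": {
    "code": "UnknownError",
    "message": '{"error":{"code":"S2SUnauthorized","message":"Invalid permission."}}',
}}
print(inner_graph_error(wrapper)["code"])  # S2SUnauthorized
```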
I could go the route of an HTTP connector, and I did spend some time fooling around with that, but eventually gave up. Using an HTTP connector has the drawback that I need to send the username and password as part of the password flow. There are options to authenticate via OAuth 2.0 with the HTTP connector, but I couldn’t quite figure out how to make it work with an App Registration I created that grants the delegated Reports.Read.All
permissions. Could be that I was just being thick when figuring it out.
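To make that drawback concrete, the token request the HTTP connector route would have needed looks roughly like this. A sketch only; the tenant, client ID, and credentials are placeholders and the request isn’t actually sent:

```python
def ropc_token_request(tenant: str, client_id: str, username: str, password: str):
    """Build (but don't send) a resource owner password credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    form = {
        "grant_type": "password",  # the "password flow"
        "client_id": client_id,
        "scope": "https://graph.microsoft.com/Reports.Read.All",
        "username": username,
        "password": password,      # plaintext credentials in the flow - the drawback
    }
    return url, form
```

Embedding a user's password in a flow like that is exactly what I wanted to avoid.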
Anyways, using a custom connector is more fun. And reusable. Plus I can set up the connector for others in their environment, without handing over the secret to others. Way more nifty in my opinion.
\nSo here’s what I did.
\nFirst, I created an App Registration. Just a standard one, but I added the delegated Reports.Read.All
permission to it and did an admin consent.
Then I created a secret, and noted that.
\nNext, I went to Power Automate and created a custom connector.
\nGo to Data > Custom Connectors > New custom connector.
\n\nCreate from blank. Give it a name.
\n\nAnd a description. And change the host to graph.microsoft.com.
\n\nThen go to the next page, Security.
\nI want OAuth 2.0 and Azure AD.
\n\nFill in the rest of the details from the App Registration. I left the Tenant ID as common, but filled in the resource URL as https://graph.microsoft.com
(fill that in exactly; I know this from experience).
Click “Create Connector”.
\n\nThat should generate a Redirect URI.
\n\nI added that to the App Registration.
\n\nGo to the next section, Definition.
\nAdd an action.
\n\nUpdate: Later, I changed the above to be like this:
\n\nThat’s because I realized there’s no way to take an input of the number of days, so I might as well create separate ones.
\nAnd a request, for that action.
\n\nThe URL is https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail
Update: Later, I changed the URL to be https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')
to match the updated name above
Click on Response, add a default response, and I added the following:
Content-Type: text/plain\r\nLocation: https://reports.office.com/data/download/JDFKdf2_eJXKS034dbc7e0t__XDe
I got this from the API documentation.
\n\nThis one’s a bit odd in that the useful response data is in the headers.
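In code terms, the report endpoint answers with a redirect whose Location header carries the actual download URL, which is why the headers are the interesting part. A sketch of pulling that out (the commented requests call is illustrative; the token is a placeholder):

```python
def report_download_url(headers: dict) -> str:
    """Return the CSV download URL from the report endpoint's redirect headers."""
    try:
        return headers["Location"]
    except KeyError:
        raise RuntimeError("no Location header - the report call likely failed")

# e.g. with requests (not run here):
#   resp = requests.get(report_url, headers={"Authorization": f"Bearer {token}"},
#                       allow_redirects=False)  # keep the 302 so we can read it
#   csv_url = report_download_url(resp.headers)
print(report_download_url({"Location": "https://reports.office.com/data/download/..."}))
```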
\nAnd that’s it, save/ update the connector.
\nNow in my Power Automate, I can call this custom connector.
\n\nAfter selecting I can see the action I created.
\n\nSave, and test/ run it.
\nOn the face of it, it throws an error.
\n\nBut that’s misleading, because the headers show the content I want:
\n\nSo I create a Compose action after this connector, and use the Location as input.
\n\nAnd set it to run even after the connector has failed.
\n\nRun it now, and the flow succeeds.
\n\nI downloaded the JSON and added more actions to it.
{\r\n \"swagger\": \"2.0\",\r\n \"info\": {\r\n \"title\": \"Graph API - Reports\",\r\n \"description\": \"Gets reports from Graph API\",\r\n \"version\": \"1.0\"\r\n },\r\n \"host\": \"graph.microsoft.com\",\r\n \"basePath\": \"/\",\r\n \"schemes\": [\r\n \"https\"\r\n ],\r\n \"consumes\": [],\r\n \"produces\": [],\r\n \"paths\": {\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D7')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 7 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 7 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail7\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D30')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 30 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 30 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail30\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {},\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": 
\"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 90 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 90 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail90\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D180')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 180 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 180 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail180\",\r\n \"parameters\": []\r\n }\r\n }\r\n },\r\n \"definitions\": {},\r\n \"parameters\": {},\r\n \"responses\": {},\r\n \"securityDefinitions\": {\r\n \"oauth2-auth\": {\r\n \"type\": \"oauth2\",\r\n \"flow\": \"accessCode\",\r\n \"authorizationUrl\": \"https://login.microsoftonline.com/common/oauth2/authorize\",\r\n \"tokenUrl\": \"https://login.windows.net/common/oauth2/authorize\",\r\n \"scopes\": {\r\n \"Reports.Read.All\": \"Reports.Read.All\"\r\n }\r\n }\r\n },\r\n \"security\": [\r\n {\r\n \"oauth2-auth\": [\r\n \"Reports.Read.All\"\r\n ]\r\n }\r\n ],\r\n \"tags\": []\r\n}
These are all the valid “period” options for that call. I wish there were some way to take an input for GET operations. There is, if I switch to POST, but that’s not what I need.
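Since the connector ends up with one GET action per period, the same constraint expressed in code is just validation against that fixed set. A minimal sketch:

```python
# The only periods the report path accepts, matching the four actions above.
VALID_PERIODS = ("D7", "D30", "D90", "D180")

def groups_activity_path(period: str) -> str:
    """Build the report path for a given period, mirroring the per-period actions."""
    if period not in VALID_PERIODS:
        raise ValueError(f"period must be one of {VALID_PERIODS}")
    return f"/v1.0/reports/getOffice365GroupsActivityDetail(period='{period}')"

print(groups_activity_path("D90"))
```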
\nUpdate: Later in the day, when Googling on something else, I came across this blog post from Microsoft. It’s a good one. And worth noting this point:
\nCustom connectors are supported by Microsoft Azure API Management infrastructure. When a connection to the underlying API is created, the API Management gateway stores the API credentials or tokens, depending on the type of authentication used, on a per-connection basis in a token store. This solution enables authentication at the connection level.
\n", "content_text": "It always impresses and scares me when I am trying to figure something out and while Googling I come across one of my own blog posts. Impressive, coz “wow I was so smart a few months ago and had figured this out!” but scary coz “shit, I’ve forgotten I ever did this”. \nCase in hand, yesterday I blogged out the HTTP with Azure AD connector. I had forgotten there was a blog post on it, and while I was looking to do some more stuff with it today I came across two more on my side.\n\nThis one, which is super informative on what else one can use that connector for. Heck, I even figured out how to use it for Forms and other stuff – sure, it was about 9 months ago and I have been busy with other stuff so don’t really remember it, but still…\nAnd this one, which is quite cool in that I use this connector to make requests to my Logic Apps.\n\nBetween these two posts, I think they capture everything I know (or should know) about this connector.\nAnyways, a colleague wanted to use the HTTP with Azure AD connector to get reports. The account in question has the Reports Reader role, and he was trying the following URL: https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')\nBut it failed:{\r\n \"error\": {\r\n \"code\": \"UnknownError\",\r\n \"message\": \"{\\\"error\\\":{\\\"code\\\":\\\"S2SUnauthorized\\\",\\\"message\\\":\\\"Invalid permission.\\\"}}\",\r\n \"innerError\": {\r\n \"date\": \"2023-08-04T09:25:01\",\r\n \"request-id\": \"364325b0-65aa-4034-ae12-53f1ac1c4ce3\",\r\n \"client-request-id\": \"364325b0-65aa-4034-ae12-53f1ac1c4ce3\"\r\n }\r\n }\r\n}It’s not surprising it failed. The connector has a limited set of scopes:\n\nThat is to say, it probably doesn’t have the Reports.Read.All delegated permission assigned to it, so even though the account can do it the connector can’t. Bummer.\nI could go the route of an HTTP connector, and I did spend some time fooling around with that, but eventually gave up. 
Using an HTTP connector has the drawback that I need to send the username and password as part of the password flow. There are options to authenticate via OAuth 2.0 with the HTTP connector but I couldn’t quite figure out how to make it work with an App Registration I created that gives the delegated Reports.Read.All permission. Could be that I was just being thick when figuring it out.\nAnyways, using a custom connector is more fun. And reusable. Plus I can set up the connector for others in their environment, without handing over the secret to others. Way more nifty in my opinion.\nSo here’s what I did.\nFirst, I created an App Registration. Just a standard one, but I added the delegated Reports.Read.All permission to it and did an admin consent.\n\nThen I created a secret, and noted that.\nNext, I went to Power Automate and created a custom connector.\nGo to Data > Custom Connectors > New custom connector.\n\nCreate from blank. Give it a name.\n\nAnd description. And change host to graph.microsoft.com.\n\nThen go to the next page, Security.\nI want OAuth 2.0 and Azure AD.\n\nFill in the rest of the details from the App Registration. 
I left the Tenant ID as common, but filled in the resource URL as https://graph.microsoft.com (fill that in exactly; I know this from experience).\n\nClick “Create Connector”.\n\nThat should generate a Redirect URI.\n\nI added that to the App Registration.\n\nGo to the next section, Definition.\nAdd an action.\n\nUpdate: Later, I changed the above to be like this:\n\nThat’s because I realized there’s no way to take an input of the number of days, so I might as well create separate ones.\nAnd a request, for that action.\n\nThe URL is https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail\nUpdate: Later, I changed the URL to be https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail(period='D90') to match the updated name above.\nClick on Response, add a default response,\u00a0 and I added the following:Content-Type: text/plain\r\nLocation: https://reports.office.com/data/download/JDFKdf2_eJXKS034dbc7e0t__XDeI got this from the API documentation.\n\nThis one’s a bit odd in that the response is the headers.\nAnd that’s it, save/ update the connector.\nNow in my Power Automate, I can call this custom connector.\n\nAfter selecting I can see the action I created.\n\nSave, and test/ run it.\nOn the face of it, it throws an error.\n\nBut that’s misleading, because the headers show the content I want:\n\nSo I create a Compose action after this connector, and use the Location as input.\n\nAnd set it to run even after the connector has failed.\n\nRun it now, and the flow succeeds. 
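\nAs an aside, the same trick the flow relies on – the report call answering with a Location header that points at the actual CSV – can be done outside Power Automate too. A quick sketch with the Microsoft Graph PowerShell SDK (assuming the signed-in account has the delegated Reports.Read.All permission, as per the App Registration above; this is not what the flow itself runs, just the same idea from a shell):

```powershell
# Sign in with a scope that covers the reports endpoints
Connect-MgGraph -Scopes "Reports.Read.All"

# Graph responds to this GET with a redirect whose Location header points
# at the CSV download -- the same header the Compose action picks up in the
# flow. Invoke-MgGraphRequest follows the redirect and writes out the file.
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')" `
    -OutputFilePath "GroupsActivityDetail-D90.csv"
```

Handy for sanity-checking the permission setup before fiddling with the connector.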
\n\nI downloaded the JSON and added more actions to it.{\r\n \"swagger\": \"2.0\",\r\n \"info\": {\r\n \"title\": \"Graph API - Reports\",\r\n \"description\": \"Gets reports from Graph API\",\r\n \"version\": \"1.0\"\r\n },\r\n \"host\": \"graph.microsoft.com\",\r\n \"basePath\": \"/\",\r\n \"schemes\": [\r\n \"https\"\r\n ],\r\n \"consumes\": [],\r\n \"produces\": [],\r\n \"paths\": {\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D7')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 7 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 7 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail7\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D30')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 30 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 30 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail30\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D90')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {},\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n 
},\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 90 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 90 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail90\",\r\n \"parameters\": []\r\n }\r\n },\r\n \"/v1.0/reports/getOffice365GroupsActivityDetail(period='D180')\": {\r\n \"get\": {\r\n \"responses\": {\r\n \"default\": {\r\n \"description\": \"default\",\r\n \"schema\": {\r\n \"type\": \"string\"\r\n },\r\n \"headers\": {\r\n \"Content-Type\": {\r\n \"description\": \"Content-Type\",\r\n \"type\": \"string\"\r\n },\r\n \"Location\": {\r\n \"description\": \"Location\",\r\n \"type\": \"string\"\r\n }\r\n }\r\n }\r\n },\r\n \"summary\": \"Get O365 Groups Activity Details - 180 days\",\r\n \"description\": \"Get O365 Groups Activity Details - 180 days\",\r\n \"operationId\": \"GetOffice365GroupsActivityDetail180\",\r\n \"parameters\": []\r\n }\r\n }\r\n },\r\n \"definitions\": {},\r\n \"parameters\": {},\r\n \"responses\": {},\r\n \"securityDefinitions\": {\r\n \"oauth2-auth\": {\r\n \"type\": \"oauth2\",\r\n \"flow\": \"accessCode\",\r\n \"authorizationUrl\": \"https://login.microsoftonline.com/common/oauth2/authorize\",\r\n \"tokenUrl\": \"https://login.windows.net/common/oauth2/authorize\",\r\n \"scopes\": {\r\n \"Reports.Read.All\": \"Reports.Read.All\"\r\n }\r\n }\r\n },\r\n \"security\": [\r\n {\r\n \"oauth2-auth\": [\r\n \"Reports.Read.All\"\r\n ]\r\n }\r\n ],\r\n \"tags\": []\r\n}These are all the valid “period” options for that call. I wish there was some way to take an input for GET operations. There is, if I switch to POST, but that’s not what I need.\nUpdate: Later in the day, when Googling on something else, I came across this blog post from Microsoft. It’s a good one. And worth noting this point:\nCustom connectors are supported by Microsoft Azure API Management infrastructure. 
When a connection to the underlying API is created, the API Management gateway stores the API credentials or tokens, depending on the type of authentication used, on a per-connection basis in a token store. This solution enables authentication at the connection level.", "date_published": "2023-08-04T16:00:50+01:00", "date_modified": "2023-08-04T18:49:21+01:00", "authors": [ { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" } ], "author": { "name": "rakhesh", "url": "https://rakhesh.com/author/rakhesh/", "avatar": "https://secure.gravatar.com/avatar/da9ad5e212b2f9d4fc6fc74828fba4f5?s=512&d=retro&r=g" }, "tags": [ "authentication", "azure ad", "http", "logic app", "power automate", "Azure, Azure AD, Graph, M365" ] } ] }