Creating a bot pt. 6 – Posts to Users

Now that we get a notification when a new post is added, we need to figure out who to notify about it. This is the process I’ve come up with:

In this example, someone has just written a reply to a post (the Post Reply entity at the bottom). From there we’re going to follow the relationships to find people that might be interested in the new post:

  • Any users explicitly mentioned in the post
  • The author of the post that’s being replied to (and any other replies in the same thread)
  • The owner of the record that the post is written on

That sequence will be the same regardless of the type of record that the post is on.

The key record type for Data8 is the account, as this is what most of our processes revolve around. If the post wasn’t on an account record, we’ll look for any relationships back to an account and repeat.

If any of the owners we’ve found are teams, we’ll expand that out into a list of individual users.
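Expanding a team into its members boils down to a query over the teammembership intersect entity. Something like this sketch, where `svc` is assumed to be a connected `IOrganizationService` and `teamId` is the team to expand:

```csharp
// Sketch: expand a team into its member users via the teammembership
// intersect entity. "svc" is assumed to be a connected IOrganizationService.
var query = new QueryExpression("teammembership")
{
    ColumnSet = new ColumnSet("systemuserid")
};
query.Criteria.AddCondition("teamid", ConditionOperator.Equal, teamId);

var memberIds = svc.RetrieveMultiple(query).Entities
    .Select(e => e.GetAttributeValue<Guid>("systemuserid"))
    .ToList();
```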

This should give us a good list of users that we should notify automatically. However, there may be cases where a user is particularly interested in a record they’re not linked to. To allow people to subscribe to notifications I’ll also look for users that follow any of the records we’ve found.

We can also use follows to handle users picking up notifications for another user while they are on leave. In this way the user that is covering their work only needs to follow one user record rather than each individual account record.
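Putting the whole process together, the fan-out can be sketched as a simple work-list loop. The `Find*` and `ExpandTeams` helpers here are hypothetical placeholders for the queries described in the rest of this post, not real SDK calls:

```csharp
// Sketch of the fan-out process described above. The Find* and
// ExpandTeams helpers are hypothetical placeholders for the queries
// covered in the rest of this post.
var toProcess = new Queue<EntityReference>();
toProcess.Enqueue(postRegardingRecord);            // the record the post is on

foreach (var mention in FindMentionedRecords(post))
    toProcess.Enqueue(mention);

var visited = new HashSet<(string LogicalName, Guid Id)>();
var usersToNotify = new HashSet<Guid>();

while (toProcess.Count > 0)
{
    var record = toProcess.Dequeue();

    // Skip records we've already handled to avoid looping forever
    if (!visited.Add((record.LogicalName, record.Id)))
        continue;

    // Owners and other related users, expanding any teams into users
    foreach (var userId in ExpandTeams(FindRelatedUsersAndTeams(record)))
        usersToNotify.Add(userId);

    // Recurse through any related accounts
    foreach (var account in FindRelatedAccounts(record))
        toProcess.Enqueue(account);
}

// Finally, add anyone following any of the records we've touched
foreach (var follower in FindFollowers(visited))
    usersToNotify.Add(follower);
```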

Connecting to CDS

To run any of this logic, the first thing we need to do is connect to CDS from our bot. We’re running the bot on ASP.NET Core, so we’ll need to use the alpha CDS package Microsoft.PowerPlatform.Cds.Client.

Because we’re connecting to CDS from an application, we’re going to use server-to-server authentication using the same client ID and secret that we used in part 4 to authenticate to the Microsoft Graph API.
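For reference, the server-to-server connection looks something like this sketch. The URL and credentials are placeholders that would really come from configuration:

```csharp
using Microsoft.PowerPlatform.Cds.Client;

// Sketch: server-to-server (client secret) connection. The URL and
// credential values are placeholders, not real ones.
var connectionString =
    "AuthType=ClientSecret;" +
    "Url=https://yourorg.crm.dynamics.com;" +
    "ClientId=<client id>;" +
    "ClientSecret=<client secret>";

var svc = new CdsServiceClient(connectionString);

if (!svc.IsReady)
    throw new InvalidOperationException("Could not connect to CDS");
```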

Getting Post Details

Once we’ve connected, we need to get the details of the post that’s just been written. This is the starting point for our wander around the relationships.

Rather than load the new post from CDS, we could just extract it from the request context that is posted to our webhook and save ourselves an extra request. The body of the post is stored in the text attribute. However, for some automated posts, this will be in an XML format rather than plain text. The XML document just references a standard template and gives the values used within it. To get the full text we’re going to need to use the RetrieveRecordWall message which expands it out for us.
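Calling RetrieveRecordWall looks something like this sketch (using the RetrieveRecordWallRequest message from Microsoft.Crm.Sdk.Messages; in practice you’d page through the wall to find the post matching the ID from the webhook notification):

```csharp
// Sketch: expand templated post text with the RetrieveRecordWall message
// (Microsoft.Crm.Sdk.Messages). "svc" is the connected CDS client and
// "regardingRecord" is the record the post was written on.
var request = new RetrieveRecordWallRequest
{
    Entity = regardingRecord,
    CommentsPerPost = 10,
    PageNumber = 1,
    PageSize = 10
};

var response = (RetrieveRecordWallResponse)svc.Execute(request);

foreach (var post in response.EntityCollection.Entities)
{
    // "text" now contains the expanded plain text rather than the
    // XML template reference
    var text = post.GetAttributeValue<string>("text");
}
```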

Post Mentions

Any records that are mentioned in the post are embedded into the post text, so this post:

is stored as:

Hi @[8,A26CF3A6-F273-EA11-A811-000D3AB398A4,"Mark Carrington"], I've just created this new record for @[1,1AF1DA21-80AE-EA11-A812-000D3AB393DC,"Data8"]

We can parse these out to get the ObjectTypeCode and guid of each mentioned record. We’ll then use these as starting points for finding interested users again.
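A simple regular expression is enough to parse the mentions out. A sketch, assuming the expanded post body is in `postText`:

```csharp
// Sketch: parse mentions of the form @[8,<guid>,"Name"] out of the
// expanded post text (assumed to be in "postText")
var mentionRegex = new Regex(@"@\[(\d+),([0-9a-fA-F-]{36}),""([^""]*)""\]");

foreach (Match match in mentionRegex.Matches(postText))
{
    var objectTypeCode = int.Parse(match.Groups[1].Value);
    var id = Guid.Parse(match.Groups[2].Value);
    var displayName = match.Groups[3].Value;

    // TODO: map the ObjectTypeCode back to an entity logical name and
    // use this record as another starting point
}
```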

Related Users

From the record that the post is written on, we find possible users by finding relationships to users or teams. However, I’ll ignore the createdby and modifiedby attributes so that users who regularly do bulk imports aren’t swamped with notifications forever.
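Finding those relationships means a trip to the metadata. A sketch using the standard RetrieveEntityRequest, where `record` is the EntityReference we’re starting from:

```csharp
// Sketch: use the entity metadata to find lookups from our record's
// entity type to users or teams, skipping createdby/modifiedby.
// RetrieveEntityRequest lives in Microsoft.Xrm.Sdk.Messages.
var metadataRequest = new RetrieveEntityRequest
{
    LogicalName = record.LogicalName,
    EntityFilters = EntityFilters.Relationships
};

var metadata = ((RetrieveEntityResponse)svc.Execute(metadataRequest)).EntityMetadata;

var userRelationships = metadata.ManyToOneRelationships
    .Where(r => r.ReferencedEntity == "systemuser" || r.ReferencedEntity == "team")
    .Where(r => r.ReferencingAttribute != "createdby" &&
                r.ReferencingAttribute != "modifiedby");

// Each remaining relationship gives us a lookup attribute on the record
// that may point at an interested user or team
```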

Related Accounts

We can also use the same approach to find relationships from our starting entity to account entities. From any account we find we’ll apply the same logic recursively, finding more users and more accounts.


As we’ve run through the process above, we’ve built up a list of records (the starting record, accounts, users and teams). As a final step we’ll find users that follow any of these records and add those users into our list to notify.
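Follows are stored in the postfollow entity, so this final step is a query like the following sketch (`recordIds` and `usersToNotify` are assumed to be the collections built up during the earlier steps):

```csharp
// Sketch: find users following any of the records we've collected.
// postfollow stores one record per follow, with regardingobjectid
// pointing at the followed record and ownerid being the follower.
var followQuery = new QueryExpression("postfollow")
{
    ColumnSet = new ColumnSet("ownerid")
};
followQuery.Criteria.AddCondition("regardingobjectid", ConditionOperator.In,
    recordIds.Cast<object>().ToArray());

foreach (var follow in svc.RetrieveMultiple(followQuery).Entities)
    usersToNotify.Add(follow.GetAttributeValue<EntityReference>("ownerid").Id);
```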

Sending the notification

Now we’ve got the details of the message we want to send out (the body of the post that’s been written) and a list of usernames to send it to. When we had a look at adding our bot to users we stored the username and all the related details we needed to send a message. Now we’ll pull those details back out and use our proactive messaging code to send a message out to each user.
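Looking the details back up is the mirror image of the insert we did in part 3. A sketch, where `SendProactiveMessageAsync` is a hypothetical wrapper around the proactive messaging code from part 2:

```csharp
// Sketch: retrieve the details stored in part 3, then hand them to the
// proactive messaging code. SendProactiveMessageAsync is a hypothetical
// wrapper around the code from part 2.
var retrieve = TableOperation.Retrieve<User>(username, "");
var result = await table.ExecuteAsync(retrieve);

if (result.Result is User user)
{
    await SendProactiveMessageAsync(user.ServiceUrl, user.TenantId, user.UserId, messageText);
}
```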

Success! I’ve got a message going all the way from D365 to my bot and on to Teams!

Of course, actually formatting the data we’ve retrieved so far into a usable message is a whole other task, and I’ll take a look at that next time.

Show me the code!

I haven’t put any code into this post, simply because there’s a lot of it! It’s all going to be available on my GitHub shortly though.

Some interesting points I found as I was building this though:

  • The sample EchoBot app I’ve been basing my app on is built on .NET Core 2.1. The alpha CDS SDK requires at least .NET Core 3.0. When I added the package reference it didn’t give any error, but it restored the .NET Framework version, which then caused runtime errors when the app started up. I needed to upgrade to .NET Core 3.0 (or 3.1), which triggered some other breaking changes:
  • Using MVC in .NET Core 3.1 requires some minor changes in the setup. It gives compiler warnings to indicate what’s needed though
  • An MVC action can have a JObject parameter in ASP.NET Core 2.1, but this gives a 400 error in ASP.NET Core 3.1. This needed the Microsoft.AspNetCore.Mvc.NewtonsoftJson package

With these changes the original services.AddMvc() call needed to become

    services.AddMvc(option => option.EnableEndpointRouting = false)
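Combined with the NewtonsoftJson package mentioned above, the relevant part of Startup.ConfigureServices ends up looking something like this sketch:

```csharp
// Sketch of the relevant part of Startup.ConfigureServices after the
// upgrade to ASP.NET Core 3.1
public void ConfigureServices(IServiceCollection services)
{
    services
        .AddMvc(option => option.EnableEndpointRouting = false)
        .AddNewtonsoftJson();   // from Microsoft.AspNetCore.Mvc.NewtonsoftJson

    // ... existing bot service registrations ...
}
```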


Of course, I didn’t get all this right first time! In the System Jobs entity in D365 there was an entry for each failed webhook notification which didn’t give too much information, but it did give the HTTP status code which was helpful to diagnose some of the problems I hit. When I hadn’t re-enabled MVC after upgrading to ASP.NET Core 3.1 for example, I got a 404 error as it couldn’t find the expected endpoint. I also got a 400 error when I hadn’t added the NewtonsoftJson package to deserialize the JObject parameter.

For handling other coding errors that resulted in a 500 error however, being able to attach a debugger directly to the app service running in Azure is a fantastic feature. This is such a massive timesaver! It works flawlessly, and so smoothly you could (almost) imagine you were debugging locally. If you haven’t used this feature before I would certainly recommend it.

Creating a bot pt. 5 – Getting notifications from D365

Now we’ve got the infrastructure sorted for the bot talking to Teams and getting it installed for users, we need to start letting it know when something interesting happens in D365 so the bot can send messages out to users. Enter the WebHook.


WebHooks are a simple way to register with D365 that some other web app is interested in an event. We can register a webhook similarly to a plugin step, so we can get D365 to send a message to our bot without writing any code.

I would normally register plugins and their steps using the Plugin Registration Tool in XrmToolBox, but it doesn’t currently support registering webhooks. Instead I had to download the official Microsoft one. Unfortunately, after installing this, I clicked “Register New Web Hook” and nothing happened. In fact, nothing happened when I clicked any of the buttons in the top ribbon. Eventually I downloaded an earlier version from NuGet and that worked as expected.

This screen needs the URL that the details of the event are going to be sent to. In this case I’ve made up the /api/notification endpoint within my bot domain. I haven’t written any code to actually handle this yet – that’ll come next.

The other part I need to specify is how D365 will authenticate with the endpoint. I’ve picked the simple option of WebhookKey – this will append a code parameter to the query string and pass in the value I specify. In my code that receives the notification I can check that and reject any requests that don’t include the right value.

WebHook Steps

Now I’ve registered the webhook I can use the Register New Step option to add a step to the webhook. This is the registration step to get CDS to trigger the webhook in response to a particular event.

With this step registered, my bot will get a notification whenever someone writes a post on the timeline of a record. I’ll also repeat this for the postcomment entity to get notifications of any replies.

That’s all I need to do in D365 / CDS – no code needed!

Bot Endpoint

CDS is now going to send some details to my /api/notification endpoint when someone writes a post. Now I need to get my bot to handle it.

The sample code I based my bot on uses ASP.NET Core MVC, so I can easily add a handler for this endpoint as:

public class NotificationController : ControllerBase
{
    public async Task<IActionResult> PostAsync([FromQuery] string code, [FromBody] JObject requestContext)
    {
        // Compare against the WebhookKey value from the webhook registration
        if (code != "")
            return Unauthorized();

        // TODO: Process notification

        return Ok();
    }
}

A few things worth pointing out from this code:

  • the code parameter will be loaded from the query string. This should match the WebhookKey authentication value that we put into the webhook registration earlier. If we don’t get the expected value we return a 401 Unauthorized result. The documentation isn’t explicit on what error should be returned, only that the request should fail, but 401 seems a good fit.
  • the requestContext parameter is loaded from the body as a JObject rather than being deserialized to the RemoteExecutionContext object it represents. This is described as a best practice in the documentation. If you’re building your bot on ASP.NET Core 3.x you’ll need to install the Microsoft.AspNetCore.Mvc.NewtonsoftJson package and include .AddNewtonsoftJson() after your AddMvc() call in your startup.cs.

That’s hopefully the last bit of wiring I need to do, and next time I’ll be able to start using this webhook to push out the proactive Teams messages we’ve already done the groundwork for.

Creating a bot pt. 3 – Adding Users

Last time I proved I could send a proactive message to myself, but I needed my user ID that I could only get after I’d started a conversation with the bot. For my real-life scenario I need to be able to push notifications to users that haven’t interacted with my bot before.

To get this to work I need to install my Teams app for the other users via Microsoft Graph, and for that to work I need to get my (currently private) app listed in my organisation’s app catalogue.

Listing in the App Catalogue

Back in the App Studio app in Teams, I went back to the Manifest Editor and selected my existing app. On the “Test and distribute” tab I clicked Download to get the zip file for my app.

Next, in the Teams admin site, I went to “Teams apps” > “Manage apps”, clicked “Upload” and selected my zip file. A few seconds later my app appeared in the list:

Getting User IDs

When a user installs my app (or I install it for them), it will automatically start a new conversation with my bot and I can grab the user ID and other details I need.

Last time I used the OnMessageActivityAsync to get the ID when I sent a message. I’m going to switch to use the OnMembersAddedAsync method that’s currently being used to trigger the welcome message so I can get the IDs straight away, without having to wait for the user to do anything. I’m going to use this code to get the values I need:

var teamConversationData = turnContext.Activity.GetChannelData<TeamsChannelData>();
var tenantId = teamConversationData.Tenant.Id;
var serviceUrl = turnContext.Activity.ServiceUrl;

foreach (var member in membersAdded)
{
    if (member.Id != turnContext.Activity.Recipient.Id)
    {
        var userId = member.Id;
        var username = ((TeamsChannelAccount)member).UserPrincipalName;

        // TODO: Store details
    }
}

As well as the user ID, tenant ID and service URL that we saw we needed before, I’m also pulling out one more bit of information – the user principal name (UPN). This is so I can target the correct Teams user based on the username we have in D365.

Storing User IDs

Now we’ve got all these details, we need to save them somewhere. We’ll need to use them in future to look up a username from D365 and find the related details to trigger a proactive message. Azure Table Storage seems to fit the bill perfectly for this – simple and virtually free.

All I need to do in Azure is create a new storage account and make a note of the connection string that’s generated for it. I put the connection string into the appsettings.json file in the bot so we can access it later.

Back in Visual Studio we need to add the Microsoft.Azure.Cosmos.Table NuGet package. Although we’re not using Cosmos DB, this is now the supported NuGet package for working with the Table API on both Azure Table Storage and Cosmos DB.

With that added we can define the structure of the table we’re going to use:

public class User : TableEntity
{
    public User(string username) : base(username, "")
    {
    }

    public User()
    {
    }

    public string UserId { get; set; }

    public string TenantId { get; set; }

    public string ServiceUrl { get; set; }
}

Now I can store the user details with:

var connectionString = _config.GetConnectionString("Storage");
var storageAccount = CloudStorageAccount.Parse(connectionString);
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference("users");

var user = new User(username)
{
    UserId = userId,
    TenantId = tenantId,
    ServiceUrl = serviceUrl
};

var insert = TableOperation.Insert(user);
await table.ExecuteAsync(insert);

To test this I uninstalled the app from Teams and re-added it to trigger the OnMembersAddedAsync event again. Unfortunately this didn’t work quite as expected:

After a bit of debugging (I’m not too proud to say that I used the bot equivalent of Console.WriteLine – catching the exception and sending it back to me as a message) I found that the error was coming from this line:

var username = ((TeamsChannelAccount)member).UserPrincipalName;

The member was a ChannelAccount and not a TeamsChannelAccount. To get these Teams-specific extensions I just need to change the base class for my bot from ActivityHandler to TeamsActivityHandler and re-publish it.
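The change itself is a one-liner, assuming the bot class is still called EchoBot as in the sample:

```csharp
// Before: members arrive as plain ChannelAccount instances
public class EchoBot : ActivityHandler

// After: Teams-specific details such as UserPrincipalName are populated
public class EchoBot : TeamsActivityHandler
```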

With this done I can now uninstall and reinstall the app again to trigger saving my user ID. A quick check in Storage Explorer shows my user details have been stored as expected:

Installing via Microsoft Graph

Because a notification might need to be sent out before a user installs our app manually, we need to push the app out. The recommended (but in-preview) way of doing this is through Microsoft Graph. There’s a sample available to show how to do this, but I haven’t used Microsoft Graph before and I need to do some more work to get this going how I want, so I’ll dig into this further next time.

Creating a bot pt. 4 – Installing for Users

Last time we managed to get everything we need to push messages to a user by having them install our app in Teams, but to make this really seamless it would be better to push the app out to users automatically.

The way to do this is using a (currently in-preview) feature of Microsoft Graph.

I haven’t used Microsoft Graph before, so I’m going to take this from the start.


To use Microsoft Graph I need to be authenticated. It uses Azure Active Directory (AAD), and this gives me two main options. I can either authenticate as:

  • a user
  • an application

I want the process of pushing out my app to be entirely automatic, so authenticating as an application is the natural choice.

Handily, the bot resource is also an application in AAD, and has an associated client ID and secret. I already used them when I added my bot to the Teams app earlier.


At the moment though, the bot application doesn’t have permissions to install apps for users. A quick look at the documentation shows that I need to give it the User.ReadWrite.All permission.

In the Azure Portal I went to Azure Active Directory, then the “App registrations” tab. I selected my bot from the list, then the “API permissions” tab.

This currently shows an empty list, so click the “Add a permission” button and I’m presented with a list of things I can give this app permission to do:

Select the top “Microsoft Graph” option and I’m asked a further question – do I want to give the app “Delegated permissions” (so it can do the operation on behalf of another signed-in user) or “Application permissions” (so it can run without a signed-in user). I want this to run entirely unattended, so I picked “Application permissions”.

This gives me a huge list of all the different permissions that are available in Microsoft Graph. Luckily I only need to grant one, so scroll down to User and select the User.ReadWrite.All one we identified earlier.

All this has told Azure that the application needs this permission, but hasn’t actually given it to it yet.

Back on the list of permissions that have been added to the application there is also a “Grant admin consent for <company>” button. Click this and the application now has the permissions it needs.

Installing the app

We’re now ready to write our custom app to install the Teams app for all our users. This is largely taken from the sample app that the documentation points to, so I’m not going to list out all the code here.


This differs from the sample app, which authenticates as the logged-in user rather than as an application.

public async Task<string> GetApplicationTokenAsync()
{
    var cca = ConfidentialClientApplicationBuilder.Create(appId)
        .WithClientSecret(appPassword)
        .WithTenantId(tenantId)
        .Build();

    var result = await cca.AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" }).ExecuteAsync();
    return result.AccessToken;
}

The main bits here that weren’t immediately obvious to me when trying to adapt the sample code to use an application user:

  • The ID of the AAD tenant I was authenticating against needed to be specified. This is presumably because an application can be multi-tenant while a user can only authenticate against their own tenant.
  • Instead of specifying the individual permissions I want in the call to AcquireTokenForClient, an application user should specify the constant https://graph.microsoft.com/.default scope to indicate that all the permissions we added to the application earlier should be included in the token

With those hiccups out of the way I’ve now got an access token I can use for calling Microsoft Graph.

Listing Users

The sample app shows getting a list of users and iterating over them to find the ones to install the Teams app for. However, it misses out that the API returns users in pages, similar to RetrieveMultiple in D365, so we need to handle moving on to the next page until we reach the end:

var users = await graph.Users.Request().Select("id,displayName").GetAsync();

while (users != null)
{
    foreach (var user in users)
    {
        // Check if the app is already installed for this user
        var installs = await HttpGetList<TeamsAppInstallation>(
            $"/users/{user.Id}/teamwork/installedApps?$expand=teamsAppDefinition&$filter=teamsAppDefinition/teamsAppId eq '{appId}'",
            endpoint: graphBetaEndpoint);

        if (installs.Length > 0)
            continue;

        // App is not already installed, so install it now
        await HttpPost($"/users/{user.Id}/teamwork/installedApps",
            new TeamsAppInstallation()
            {
                AdditionalData = new Dictionary<string, object>
                {
                    ["teamsApp@odata.bind"] = $"{graphBetaEndpoint}/appCatalogs/teamsApps/{appId}"
                }
            },
            endpoint: graphBetaEndpoint);
    }

    // Finished processing all the users in this page, move on to the next page
    if (users.NextPageRequest != null)
        users = await users.NextPageRequest.GetAsync();
    else
        users = null;
}

Installing the correct app

The last bit that stung me was getting the correct appId to use with this code. I’ve been working with the Web App Bot resource for a while now, so I’m used to using that ID, but that was giving me a 404 error with the message:

The definition for app '<guid>' was not found in the org-wide catalog

Finally it struck me that I’m trying to install the Teams app which includes my bot, not the bot itself. When I created the Teams app in App Studio originally it asked for an ID to use, and I just clicked the “Generate” button to create a random guid. Thankfully I can easily find the correct guid either in App Studio by choosing to edit my existing app, or by selecting it from the list in the Teams admin centre.

User Notification

With that sorted my installer runs through and adds the app for each user. That triggers the OnMembersAddedAsync method in my bot, which saves the required IDs to the Azure Table Storage table and sends the welcome message to the user.

The pieces are starting to come together! We’ve now got the app installed so we can start pushing out notifications to anyone who might need them. I think we’re ready to move on to generating useful notifications based on D365 activity! Stay tuned…

Creating a bot pt. 2 – Proactive Messages

The key thing I want to do with my bot is to push messages out to users when something happens in D365, not just in response to the user sending a message to the bot. This was the part I was unsure of, but a bit of searching gives me the key phrase Proactive Messages. Apparently this is what I need to implement.

Required Information

From a bit of research it seems I need a few IDs to be able to send a proactive message:

  • my bot
  • the user to send the message to
  • the Azure Active Directory tenant the user is in

I already know the ID of my bot, but I need to capture the user and tenant IDs. The user ID is not the user’s login name, but an apparently random ID assigned by Teams (e.g. 29:cXh2ZnlWREI2VgpMY3o3UFdGZU9xCnJMUzBScUJqNWIKSlVrZkpaanh0cgpOMEtXVThVSDl).

I can grab this user ID and the tenant ID from the events in my bot. I finally need to write some code!

Editing the bot code

In part 1 I created the bot using the Echo Bot sample, which deployed a fully-built bot for me. Now I need to start customising this bot, so I need to grab that code.

Back in the Azure Portal I just have to open the Web App Bot resource I created earlier, click on the Build item and then on “Download Bot source code”. That gives me a zip file – just unzip it and open the solution inside it in Visual Studio.

The core code that handles the incoming messages in this bot is:

protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    var replyText = $"Echo: {turnContext.Activity.Text}";
    await turnContext.SendActivityAsync(MessageFactory.Text(replyText, replyText), cancellationToken);
}

To get started I just want to quickly prove I can get the required information. I’m going to replace the current “Echo” response with all these IDs:

protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    var teamConversationData = turnContext.Activity.GetChannelData<TeamsChannelData>();

    await turnContext.SendActivityAsync(MessageFactory.Text("Your ID is " + turnContext.Activity.From.Id), cancellationToken);

    await turnContext.SendActivityAsync(MessageFactory.Text("My ID is " + turnContext.Activity.Recipient.Id), cancellationToken);

    await turnContext.SendActivityAsync(MessageFactory.Text("Tenant ID is " + teamConversationData.Tenant.Id));
}

The ID of the user and the bot is available in the standard turnContext parameter, but the tenant ID is specific to the Teams channel so we need to get that out of the TeamsChannelData class. This is a good start:

My bot ID appears to be the same ID I used when I created the Teams app package, but with the prefix 28:. The Azure Active Directory tenant ID is the same ID I’m used to seeing in various situations now, but the user ID appears completely random.

Sending a Proactive Message

Now I’ve got the required details I want to set up a simple console app. This should demonstrate I can send a message before I jump into building anything into D365. Following the docs again and using some hard-coded IDs for now just to get started I built:

async static Task Main(string[] args)
{
    const string url = "";
    const string appId = "<guid>";
    const string appPassword = "<password>";

    var client = new ConnectorClient(new Uri(url), appId, appPassword);

    // Create or get existing chat conversation with user
    var parameters = new ConversationParameters
    {
        Bot = new ChannelAccount("28:<guid>"),
        Members = new[] { new ChannelAccount("29:<user id copied from earlier chat>") },
        ChannelData = new TeamsChannelData
        {
            Tenant = new TenantInfo("<tenant id copied from earlier chat>")
        }
    };

    var response = await client.Conversations.CreateConversationAsync(parameters);

    // Construct the message to post to conversation
    var newActivity = new Activity
    {
        Text = "Hello",
        Type = ActivityTypes.Message,
        Conversation = new ConversationAccount
        {
            Id = response.Id
        }
    };

    // Post the message to chat conversation with user
    await client.Conversations.SendToConversationAsync(response.Id, newActivity);
}

Aaaannnddd… I get the error:

Operation returned an invalid status code 'NotFound'

After some frustrated searching I found that the URL I needed to connect to was not that of my bot itself, but a channel-specific URL provided by the Azure Bot Framework. I found some suggested URLs to use for different channels, but the way I went for was to include this URL in my bot response by adding in this line:

await turnContext.SendActivityAsync(MessageFactory.Text("Service URL is " + turnContext.Activity.ServiceUrl));

This gave me the URL. I assume it will change depending on the region you’re in, so it does seem best to get this value dynamically in the same way as the other IDs rather than hard-coding it.

With that hurdle crossed I can now run my console app and get an alert in Teams!

Everything seems to work just how I’d hope at this point – I get the popup alert message with the contents of my message, the Teams app in the task bar is flashing to show I’ve got a message, and it appears in the timeline of my conversation when I open Teams. Great!

This seems to be all I need to get messages sent out to me, but currently I’m skipping a load of complexity by hard-coding my own user IDs. I need to start making this more dynamic so I can send messages to others too, which I’ll look at next time.

Creating a bot pt. 1 – Getting Started

Before I make any progress with the real logic of my new bot, I need to get the basics set up.

I first need to create a simple bot and be able to interact with it from Teams. This is where the whole Azure Bot Framework really shines – I need to know just about nothing about all these different moving parts to get started.

First, in the Azure portal, I created a new resource group to keep everything I’m going to need in one place. Then I click Add Resource and just type in “Bot”. You get quite a few options to choose from:

Creating the Bot

I want to build my own bot rather than starting from one of the third-party ones on offer, so I’m going to pick between these first two options I’ve highlighted. After a quick bit of research, the different options are:

  • Web App Bot hosts the bot within an automatically-created App Service resource as well as registering it with the Azure Bot Framework
  • Bot Channels Registration registers a bot you’ve already created and hosted elsewhere

I’m all in favour of not having to worry about the underlying infrastructure wherever I can, so I’ve gone for the Web App Bot option.

I’ve got to pick a few standard options when I’m creating the bot – all the usual things of what to call it and what region to host it in, a pricing tier (the free one will be fine for now), and, handily, a template for my bot. I’ve gone with the simple Echo bot for now, as I’m not planning on using natural language input from users at least to start with.

After creating the bot I’ve ended up with three new resources in my resource group:

  • App Service Plan to run my bot in
  • App Service to host my bot
  • Web App Bot to register my bot with the Bot Framework

It’s worth noting that although I used the free tier for the bot registration, the App Service Plan it’s created to host it is at the S1 pricing tier, which is going to cost me about £32 per month. Not the end of the world for a real scenario, but I’m only playing for the moment so the next thing I’m going to do is scale that down to the F1 (free) tier.

Quick Test

I haven’t written any code yet but I’d like to see it working before I start to change anything. The simplest way is to head over to the Web App Bot resource and click on the “Test in Web Chat” option. This gives me a quick and easy way to interact with my bot.

As expected, I can type in messages and the sample bot simply echoes them back to me:

Great, the sample code is working! Now to link it to Teams.

Integrating with Teams

First off we have to tell the bot which channels we want it to work with – these are the end-user messaging apps it will connect to. We’ve already seen it working through the test web-chat channel.

To get started, click on the Channels option within the Web App Bot resource, then on the Teams icon. I’m not using the government cloud so I just leave the default options selected and click Save. Agree to the terms, and I’ve got my bot linked to Teams. Great! Now what?

Creating a Teams App Package

This is where it got confusing for me. In order to use my bot from Teams, apparently I have to create an app package for it. I think it’s the terminology that gets me here – I’m creating a bot, but now I have to create an app instead.

Once I decided to stop worrying about the words and get on with it though, it wasn’t too bad. I used the App Studio app in Teams to create the package with some basic details and add my bot to it. The key bit of information I needed here was the bot id (saved as the MicrosoftAppId in the Configuration section of the Web App Bot resource). All the other bot details were pulled in for me automatically after I entered this, which is very nice!

The other thing was to understand the scope of my bot. When you add the bot to the app package you have to select from Personal, Team or Group Chat scopes. In my case I think everyone is going to have their own personal conversations with the bot so I’m going to leave this on the default – Personal. I assume I can come back and change this later if I need to though.

Installing & Testing

Now I’ve created my app package, I just need to install it. This is where I hit a bit of friction. I expected to be able to use the “Test and distribute” section within App Studio to do this, but I kept getting unhelpful permissions-related errors.

Eventually I found I had to enable an option in the Teams admin centre to let me do this. Under Teams apps > Setup policies, edit the relevant policy and turn on the “Upload custom apps” option.

Now the frustrating part – it still doesn’t work! Teams policies seem to take some time to apply, so come back to it later, go to App Studio and click that Install button again. If everything’s going smoothly your bot should now appear as a new app in the left hand menu, and you can start chatting to it in Teams!

Hooray! I still haven’t written a single line of code, but I’ve got a fully operational bot that I can chat to from Teams. Brilliant!

That’s enough for today, next time I’ll take a look at how I can get my bot to push out messages to Teams rather than just respond when the user sends a message.

My first bot – D365 Notifications

At Data8 we’ve successfully used posts on CRM records for some time to share updates. The problem was that people didn’t know when someone had written a post, so they would resort to email instead.

To avoid this problem I set up a service to automatically email people that might be interested in a record whenever someone posted on it. They could read the new post in their email, and click on a link to take them to the record if they wanted to reply.

1. Add a post on a record in CRM
2. An automatic email is sent out with the details to anyone who might be interested


This works OK, but as we’ve now moved to Teams for most communication it seems sensible to move this over as well. A bot that could automatically send people a notification whenever something interesting happens in D365 (new posts in this case, but I could see it being wider in future) seems ideal.

After a quick bit of searching it seems this should be possible with the Azure Bot Framework and Teams. I’ve never written a bot before, though I’ve attended some great sessions by Artur Zielinski that made me think it shouldn’t be too hard to get started.

I’m still investigating exactly how it all works, but the general picture in my head right now is something like:

My general plan is a plugin in D365 that triggers when a post is added. That sends a notification to the bot, which creates a message to send on to the Bot Framework, which in turn passes it on to Teams for the user to see. That replicates what we currently have via email, but ideally we'll get the opposite direction working too: when the user replies to a Teams message, it gets passed all the way back and saved as a reply to the post in D365.

I’ll be working on this over the next few weeks – check back to see how I’m doing!

SQL 4 CDS 2.1.0 – the T-SQL edition

Since I’ve been doing some investigation into the new preview T-SQL endpoint for CDS recently, I’ve decided to integrate it into SQL 4 CDS. Update to 2.1.0 to get:

Enable/Disable T-SQL Endpoint

The current official way to enable or disable the T-SQL endpoint is rather awkward, and doesn’t seem to work for me (though it does for others).

SQL 4 CDS 2.1.0 introduces a new T-SQL Endpoint item in the Object Explorer pane (if your instance is on a high-enough version). Right-click on it to enable or disable it.

Run queries using T-SQL Endpoint

Once the endpoint is enabled you can already run SELECT queries against it using SQL Server Management Studio, but now you can also get SQL 4 CDS to use the same endpoint for SELECT queries while still being able to run UPDATE / INSERT / DELETE statements as well.

To enable this option, open the settings dialog and tick the “Use T-SQL Endpoint where possible” option.

Now when you run a SELECT query against an instance where the T-SQL endpoint is enabled, SQL 4 CDS will use the endpoint to execute it. Other queries will continue to work too, as they'll be sent to the standard CDS API.
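As a sketch of how that routing works (the column and filter values here are just illustrative):

```sql
-- A read-only query: SQL 4 CDS can send this to the T-SQL endpoint
SELECT name, accountnumber
FROM   account
WHERE  statecode = 0

-- A data modification: the T-SQL endpoint is read-only, so SQL 4 CDS
-- handles this itself via standard CDS API update requests instead
UPDATE account
SET    telephone1 = '0123 456 789'
WHERE  name = 'Data8'
```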

For this to work you’ll need to use an OAuth method of authenticating with CDS. You’ll know if you’ve done this as the T-SQL Endpoint item in the Object Explorer pane will show a green icon. If it’s got a yellow warning triangle, you’re using another authentication method and SQL 4 CDS won’t use the T-SQL endpoint.

When you use this new option you can run any SELECT query that the T-SQL endpoint supports, whether or not SQL 4 CDS can convert it to FetchXML. That gives you all the flexibility the endpoint offers, plus the INSERT / UPDATE / DELETE support that SQL 4 CDS provides.
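To give a feel for the sort of query this unlocks: FetchXML has no way to express computed columns or most T-SQL functions, but the endpoint can evaluate them server-side. A sketch using standard account attributes (exact function support may vary while the endpoint is in preview):

```sql
-- Computed columns and functions like these have no FetchXML equivalent
SELECT UPPER(name)                          AS name,
       YEAR(createdon)                      AS yearcreated,
       DATEDIFF(day, createdon, GETDATE())  AS agedays
FROM   account
```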

CDS T-SQL Endpoint pt 8 – Security

The security model is a key advantage of CDS over alternatives such as SharePoint or SQL. Data is one of any organisation's most precious assets, and the built-in ability to restrict access to records based on ownership, sharing, teams, hierarchy and even per-field makes this an incredibly powerful feature.

All these security options take effect at the lowest levels of the CDS API, so however you access the data the same rules are automatically applied. So long as you've got your security roles set up correctly, it shouldn't matter whether your users use your Dynamics 365 app, XrmToolBox or even start building their own Power Apps.

Of course, the T-SQL endpoint provides a new way to get at your data, so we need to be sure that the same security rules apply here too.


We've already seen that users have to authenticate with the T-SQL endpoint using the same details as for any existing app, so it knows who is connected and should be able to apply the same security rules.

Security Roles

Roles are the starting point for CDS data security. They identify the basic set of records the user can access based on who owns each record. Roles can get very complex, but for now the T-SQL endpoint is read-only, so we only need to worry about the read permissions.

I've created a few account records and assigned them all to one user. When I log in as another user with only user-level read permission and run

SELECT name, accountnumber FROM account

no records are returned. So far so good.

If I now update the security role to give organisation-wide read permission and re-run the query, all the accounts are returned. No need to disconnect and reconnect; the changes are applied immediately. Great!

Field Level Security

While security roles define which records the user can access, field level security defines which fields within those records they can access. By default every field is readable & writable, but you can lock down specific fields to be read-only or even unreadable to all but specific users or teams.

To test this out I applied field level security to the accountnumber field on the account record. As a system administrator I automatically get full access to any secured fields. My other user account however does not. This time when I run the same query I get:

Results for system administrator

Results from normal user

So field level security rules are also applied as expected, and an unreadable field will be returned as NULL.
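As a concrete sketch of the difference between the two users (the record values here are made up for illustration):

```sql
SELECT name, accountnumber FROM account

-- As system administrator (full field security access):
--   name     accountnumber
--   Data8    A-1001
--
-- As the normal user (no read access to the secured accountnumber field):
--   name     accountnumber
--   Data8    NULL
```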


Teams

Rather than working with individual users, it can be more efficient to work with teams. This allows you to assign permissions to a whole set of users at once. To complicate matters there are two different types of teams – Owner Teams and Access Teams. I won't go into the differences here, but I will give both of them a go.

Owner Teams

I’ve switched my security roles back to give my normal user access to only records it owns. That user now can’t see my account records any longer. Next I’ve created an owner team, given it the same security role and reassigned one of the accounts to the team.

Now when I run the same query again as my normal user I see that one record. Why? Because my normal user is part of the team, the team owns the record, and the team's security role says it can read accounts it owns.

Access Teams

I’ve now reassigned all my accounts back to my system administrator account, so my normal user can’t see any again. I’ve created an access team and added my normal user to it.

When I share one of the accounts to the access team, my normal user can now see it.


Sharing

Rather than using teams to give extra users access to a record, you can also share the record directly with a specific user. This doesn't scale well, as it gets time-consuming for large numbers of records, but it works well for odd cases.

Again I removed the team that was giving my normal user access to the accounts and checked it couldn't see any records, then shared one of the records directly with that user. Again, that record appears in the results as expected.

Hierarchy Security

With hierarchy security, the privileges of a user can change depending on where they are in the organisational hierarchy. For my test I've enabled the Manager Hierarchy option, which gives a manager access to the same records as the users that report to them.

I’ve reset everything so my normal user does not have access to any account records. Next I’ve updated the Manager field of my system administrator user to be my normal user, and re-run my query. Immediately both users can now see all the records, as the normal user has inherited access from the system administrator.


So far as I can tell, the T-SQL endpoint honours all the security restrictions that are imposed by the rest of CDS, which should be a big tick for enterprises considering rolling this out. It should finally enable Power BI reports to start replacing the older charts and dashboards we've had to use to ensure the correct security filtering is applied.

CDS T-SQL Endpoint pt 7 – Extensibility

One of the aspects of the CDS platform I’ve liked since Dynamics CRM 4.0 is the extensibility options – the ability to add plugins into the core of the engine so that your custom code will be executed however someone is using it, whether through the standard UI, portals, external SDK app etc.

Although it already has a very flexible security model, you can always add plugins to the Retrieve and RetrieveMultiple messages to further log or filter the data that a user has access to.

If the T-SQL endpoint is a view onto CDS rather than a direct path to the SQL database then these plugins should still be fired. Alas, they’re not. No matter how I formulate the query or configure the plugin step it just doesn’t fire.

This makes some sense – you can’t translate all the possible SQL you might fire at it into any of the existing query types that the plugin might be able to get out of the parameters. I did wonder if there might be a new QueryBase-derived class to represent the SQL queries, but apparently not.

So, beware if Retrieve or RetrieveMultiple plugins form a core part of your security or business logic, as users could use T-SQL to bypass them!

So maybe there’s an entirely different SDK message processing step that I need to register my plugin on instead?

There's nothing obvious listed when I type into the "Message" field in the plugin registration tool, so I decided to compare the list of available messages in my test instance against an old on-premise system to find which messages have been added. I ran

SELECT name FROM sdkmessage

against both systems and compared the results in Excel. There are a lot of extra messages online compared to on-premise, but one in particular stood out:


This is almost certainly the one, given that the main purpose of this endpoint was to support running reports in Power BI.

Unfortunately the Plugin Registration Tool doesn’t offer the option of adding a step against this message. I’ve also tried adding one directly via the SDK and it gives the error:

Custom SdkMessageProcessingStep is not allowed on the specified message and entity.

So as far as I can tell the SQL requests do go via the core CDS service, but they use a different message than a standard RetrieveMultiple, which currently makes it impossible to attach custom plugins to inspect or alter what's going on.