Dataflex Pro T-SQL Performance

I wrote a post a while ago looking at the performance of the T-SQL endpoint compared to using FetchXML. Now I’ve got a better understanding of how it works, I wanted to check if the same performance considerations I’ve previously encountered when working with the filtered views on-premise still apply.

For my testing I have set up an environment with 50,000 account records and timed running a number of queries. I timed each query within SQL Server by running:

DECLARE @starttime datetime = CURRENT_TIMESTAMP
-- Query here
DECLARE @endtime datetime = CURRENT_TIMESTAMP
SELECT DATEDIFF(ms, @starttime, @endtime)

This gave me the results of the query and the time it took to execute, in milliseconds.
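For reference, the queries themselves were run over the new endpoint using a standard ADO.NET connection. A minimal sketch of that connection, assuming the preview TDS endpoint is enabled (the organisation name is a placeholder; 5558 is the port the preview listens on):

```csharp
using System;
using Microsoft.Data.SqlClient;

// Illustrative connection to the T-SQL endpoint - "myorg" is a placeholder
var connectionString = "Data Source=myorg.crm.dynamics.com,5558;" +
                       "Authentication=Active Directory Interactive;";

using (var con = new SqlConnection(connectionString))
{
    con.Open();

    using (var cmd = new SqlCommand("SELECT name FROM account", con))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}
```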

Large Datasets

My first test was the simple query:

SELECT * FROM account

With 50,000 accounts, and accounts having a large number of columns by default, I expected this to take a while. I didn’t, however, expect to receive the error:

The request channel timed out while waiting for a reply after 00:02:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.

This is the standard error generated by a timeout from the SDK. While my earlier testing indicated there isn’t a record count limit imposed by the T-SQL endpoint, there is a 2 minute execution time limit, and this query appears to have exceeded it.

More Rows, Fewer Columns

Of course, SELECT * FROM account is a very lazy way to write a query – you should only ever select as many columns as you actually need. Next up I tried:

SELECT name FROM account

This ran in a much healthier 117 milliseconds, although the result took a total of 4 seconds to transfer to my PC. Phew!

Date Columns

One major performance problem I’ve encountered previously with the on-premise filtered views is around date/time columns. These are converted by the view to the time zone of the user running the query, which makes things nice and consistent when you are displaying the data in a report. However, the underlying data is always stored in UTC, and the conversion at runtime to the user’s local timezone comes at a cost. It’s implemented via a function, which therefore needs to be executed separately for each date/time value being considered by the query.

To test the effect of this I ran:

SELECT   DATEPART(day, createdon)
FROM     account
GROUP BY DATEPART(day, createdon)

This took 21,013 milliseconds – almost 200 times longer than selecting a simple string field!

Note that this performance hit is incurred whenever a date/time value is required for the query to be processed, not just when they are present in the results returned to the user. For example:

SELECT name
FROM   account
WHERE  createdon >= '2020-01-01'

would need to calculate the user-local version of the createdon date for every account in order to determine which records to return.

UTC Columns

For each date/time column there is a corresponding UTC column, e.g. createdonutc. This maps through directly to the column in the underlying table without any further calculation. If you can use the UTC version in your application, this has a major impact on performance. Modifying the example from above:

SELECT   DATEPART(day, createdonutc)
FROM     account
GROUP BY DATEPART(day, createdonutc)

This only took 110 milliseconds, compared to the 21,013 milliseconds the user-local version took above – almost a 200x improvement! Definitely one to watch out for! If you are filtering by a date/time field using a user-local timezone value, consider converting that value to UTC and filtering on the corresponding UTC field instead for an easy performance boost. If you’re displaying date/time values back to the user you still need to convert them to the correct time zone at some point of course, but you may well find this more efficient to do in your own code than using the conversion supplied by SQL.
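As a sketch of doing that conversion in your own code before building the query (the time zone ID here is just an example – in practice you’d look up the querying user’s actual zone):

```csharp
using System;

// Convert a user-local cutoff value to UTC so we can filter on createdonutc.
// "GMT Standard Time" is an assumed example zone, not a universal value.
var userZone = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
var localCutoff = new DateTime(2020, 1, 1, 0, 0, 0, DateTimeKind.Unspecified);
var utcCutoff = TimeZoneInfo.ConvertTimeToUtc(localCutoff, userZone);

// Filter on the cheap UTC column rather than the user-local one
var sql = $"SELECT name FROM account WHERE createdonutc >= '{utcCutoff:yyyy-MM-dd HH:mm:ss}'";
```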

Displaying latest records

A common pattern for reports is to show records that have been created or modified in the last day/week/month. As we’ve seen above, using a simple query like:

SELECT   name, createdon
FROM     account
WHERE    createdon >= DATEADD(d, -1, GETDATE())
ORDER BY createdon

will give some major performance problems. Tweaking this to:

SELECT   name, createdon
FROM     account
WHERE    createdonutc >= DATEADD(d, -1, GETUTCDATE())
ORDER BY createdonutc

will give the same results (possibly with some minor edge case differences around daylight saving time changes) much, much faster.

Although this second query still includes the local timezone version of the createdon date, all the filtering and sorting is done on the UTC version. This means that the timezone conversion code only needs to run on the matching records, not the entire table.

Optionset Columns

Given the performance hit on date/time columns, I thought I’d also check the difference between the integer and label versions of an optionset column. Take this query for example:

SELECT   industrycode
FROM     account
GROUP BY industrycode

This produces the underlying value for each industry code, and runs in 104 milliseconds.

Now to get the corresponding label for each industry, in the current user’s language, this needs to change to:

SELECT   industrycodename
FROM     account
GROUP BY industrycodename

This version runs in 170 milliseconds. Noticeably slower than the previous version, but nowhere near the performance hit incurred by timezone conversions.

MSDyn365 Internals: T-SQL Endpoint

Disclaimer: the details in this post are based on the best guesses I’ve been able to make about how this works, which are derived from the behaviour I’ve seen while using it. I don’t have any official information on any of this, so use at your own risk!

As you might have noticed from my earlier posts, and my D365UG session on the topic, I’ve been a bit excited recently about the preview of the T-SQL endpoint in Dataflex Pro (formerly CDS).

I’ve done my best to poke and prod at it, and I think I’ve got a pretty good idea of how it works. I’ve outlined this below, as the more I understand how something works internally, the better I can decide how to use it. If you work the same way, great!


This is my understanding of the overall process of executing a SQL query.

  1. The user authenticates against the endpoint and executes the query
  2. The SQL endpoint connects to Dataflex Pro, passing through the OAuth token provided by the user for authentication. It then executes an ExecutePowerBISql request, supplying the SQL text from the user
  3. Dataflex Pro processes the request and passes it to the database to execute
  4. The database returns the result of the query to Dataflex Pro
  5. Dataflex Pro stores the result in the ExecutePowerBISql response and passes it back to the SQL endpoint
  6. The SQL endpoint unpacks the data from the response and passes it back to the client

While the SQL endpoint itself is fascinating, I’m particularly interested in what’s going on inside Dataflex Pro. How does it process the ExecutePowerBISql request? How is the query ultimately executed by SQL Server?

By understanding those two systems we’ll be better able to craft performant queries and anticipate and resolve errors.

SQL Database and Filtered Views

Starting from the bottom layer, my understanding is that the query is ultimately executed against filtered views. These may be familiar to those with memories of writing reports for on-premise deployments, but a quick history lesson for those more fortunate:

In on-premise Dynamics 365, you are able to write SSRS reports that can execute directly against the SQL database, rather than going via FetchXML. Dynamics 365 automatically maintains a filtered view, e.g. FilteredAccount, FilteredContact etc., for each entity type. These views provide an abstraction over the top of the raw tables:

  • row- and field-level security is automatically applied based on the current user
  • optionset attributes have both the underlying integer value and the corresponding label available as separate fields
  • date attributes have the value available in both UTC and the user-local timezone
  • lookup attributes have the GUID and display name of the related record available

If you have access to an on-premise deployment, you can look at the definition for these views and see just how much work SQL Server is doing to provide all this value. In our test instance for example, the view definition for FilteredAccount stretches over 1,938 lines!

Query Rewriting

So, if the SQL Database has a view called FilteredAccount, how can I write the query SELECT * FROM account and expect it to work? In fact, the on-premise database (and presumably the online version too) has another view called account that does not have all the extra functionality I’ve outlined above.

I therefore believe the Dataflex Pro step of executing the ExecutePowerBISql request does some amount of rewriting of the query, so what gets executed by SQL Server isn’t exactly what you wrote. You can see this in some edge cases where the process fails in some unexpected ways.

This step will also likely do some sanitisation of the query to ensure you aren’t trying to do anything naughty. This is currently a read-only system, so it may well be this step that ensures you aren’t writing an INSERT statement for example.

So how does that query rewriting look, and why is it important? Take a look at some examples:

Original:  SELECT * FROM account
Rewritten: SELECT * FROM FilteredAccount

Original:  SELECT name, fullname FROM account INNER JOIN contact ON accountid = parentcustomerid
Rewritten: SELECT name, fullname FROM FilteredAccount INNER JOIN FilteredContact ON accountid = parentcustomerid

Original:  SELECT, contact.fullname FROM account INNER JOIN contact ON account.accountid = contact.parentcustomerid
Rewritten: SELECT, contact.fullname FROM FilteredAccount INNER JOIN FilteredContact ON account.accountid = contact.parentcustomerid

Looks simple enough, but notice the last query. The original query was fine, but after rewriting, the names account and contact are now invalid and the query results in an error. Hopefully in future the rewriting will take care of this and automatically apply an alias to match the original name. In the meantime though, you need to do this manually:

SELECT, contact.fullname
FROM   account AS account
       INNER JOIN contact AS contact
       ON account.accountid = contact.parentcustomerid;

You can also see how the rewriting works by running the query:

SELECT FilteredAccount.name, FilteredContact.fullname
FROM   account
       INNER JOIN contact
       ON FilteredAccount.accountid = FilteredContact.parentcustomerid;

This query works correctly, indicating that the account and contact table names have been changed to FilteredAccount and FilteredContact respectively. However, the sanitisation step prevents you from running the final query directly:

SELECT FilteredAccount.name, FilteredContact.fullname
FROM   FilteredAccount
       INNER JOIN FilteredContact
       ON FilteredAccount.accountid = FilteredContact.parentcustomerid;

This generates the error Invalid table name FilteredAccount. This appears to be done to ensure you can only access tables that correspond to standard Dataflex Pro entities, and not any other tables that are maintained in the database for internal use and do not have the same automatic security restrictions in place as the filtered views.

ExecutePowerBISql Request

As with all other Dataflex Pro functionality, the SQL execution is exposed as a pair of request and response messages. The SQL endpoint appears to be acting as a proxy between the TDS protocol, used by SQL Server, and the Dataflex Pro SDK to execute this message.

However, there is nothing to prevent any other SDK client, including your own code, from executing this request. Take the following code for example:

var request = new OrganizationRequest("ExecutePowerBISql")
{
    Parameters = new ParameterCollection
    {
        ["QueryText"] = "select name from account"
    }
};

var response = svc.Execute(request);
var dataset = (DataSet) response.Results["Records"];

From this point the data is available in a standard ADO.NET DataSet object, which your code can display in a grid, iterate over etc.
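As a minimal sketch of reading those results back out (the column name assumes the query above):

```csharp
using System;
using System.Data;

// Dump the rows of the DataSet unpacked from the ExecutePowerBISql response
static void PrintNames(DataSet dataset)
{
    var table = dataset.Tables[0];

    foreach (DataRow row in table.Rows)
        Console.WriteLine(row["name"]);
}
```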

For this to work, the TDS Endpoint must still be enabled or you will receive an error. It appears that this feature switch is applied at the Dataflex Pro level rather than within the SQL Endpoint proxy software itself.

PCF Dataset Control – Lookup Values

After getting over my initial authentication problems, my next adventure with PCF was to show some details from a lookup field in a dataset-bound control.

With a dataset control, the manifest file can contain mappings for the attributes in the dataset that will be shown in the control, e.g.

  <control namespace="MyNameSpace" constructor="MyDatasetControl" version="1.0.0" control-type="dataset">
    <data-set name="dataSet">
      <property-set name="textField" of-type="SingleLine.Text" usage="bound" />
    </data-set>
  </control>

This would allow the control to be configured to show the details from any text field in the dataset. In my case though, I wanted to show the details of a lookup field (specifically the ownerid field). Unfortunately, the Lookup.* attribute types are not supported, and produce an error when building the control.

As I only actually needed the name of the owner, not the ID, I thought I’d create a calculated field on the entity to take the name of the owning user. However, while calculated fields can take values from related records, the owner details seem to be special and not available.

Next I considered using a Flow to copy the owner’s name into a separate field, but this seemed like overkill. There had to be a better way.

As this was my first dataset-bound PCF control, I assumed that all the columns it was going to use had to be listed in the manifest. However, I’d been over-thinking it. I found that it also has access to all the columns in the view it’s bound to. So as long as the view had the ownerid field I needed, all the data would be available to me anyway.

Rather than hard-code the ownerid field name though, I’d still prefer to make it configurable. To that end I added a simple input type text field so I could enter the field name in the control configuration.

Final Code

After a lot of trial and error trying to find a solution, I ended up with (abbreviated):


  <control namespace="MyNameSpace" constructor="MyDatasetControl" version="1.0.0" display-name-key="DemoPcf_Display" description-key="DemoPcf_Description" control-type="standard">
    <data-set name="dataSet" display-name-key="Dataset_Display_Key" />
    <property-set name="fieldName" display-name-key="fieldName_Display" description-key="fieldName_Description" of-type="SingleLine.Text" usage="input" required="true" />
  </control>


public init(
    context: ComponentFramework.Context<IInputs>,
    notifyOutputChanged: () => void,
    state: ComponentFramework.Dictionary,
    container: HTMLDivElement,
  ): void {
  this._fieldName = context.parameters.fieldName.raw || "";
}

public updateView(context: ComponentFramework.Context<IInputs>): void {
  const values = => {
    const record = context.parameters.dataSet.records[id];
    const lookupName = record.getFormattedValue(this._fieldName);
    const lookupValue = <string>record.getValue(this._fieldName);

    const value = new Record(); = id;
    value.user = new User();
    value.user.userId = lookupValue;
    value.user.text = lookupName;

    return value;

  (this.viewModel.values as IObservableArray<Record>).replace(values);
}

After adding this to the form I can type in the column I want to show the value from. So long as that column is in the view, the value will be shown correctly.

It’s not perfect, as there’s no validation on the name of the attribute you can type in, or that it exists in the view. Unfortunately, without the ability to create a bound lookup field, I think this is the best we currently have.

I’m really not sure why we can’t bind to lookup fields properly, especially as we can use them when they’re in the view. Hopefully this will come in time 🤞

PCF NuGet authentication error

After working with PCF during the initial private preview, I haven’t had a chance to use it in anger. Until today. And it didn’t go smoothly.

It started well. I installed the CLI tooling, got a new template control ready to go with pac pcf init and connected to CDS with pac auth create. Next I tried to publish the control with pac pcf push – and that’s when it failed.

The build log indicated a 401 Unauthorized error with an internal NuGet feed.

For context, I mostly do C# development using Azure DevOps to store internal NuGet packages. I use this every day in Visual Studio without problems. In this case though, I don’t even need to use a package from that feed, but something’s trying to connect to it anyway.

If I remove the internal feed from the list in NuGet.config, it works fine. But this breaks my work in Visual Studio, so that’s not really an option.

So I went back to basics and followed through the process of connecting to the Azure DevOps feed. The first step was to install the tooling, but that was already installed. The next was to run dotnet restore --interactive. Doing this from the folder containing my PCF project prompted me to enter an MFA code, then proceeded successfully.

After this my pac pcf push commands worked correctly! A frustrating way to spend an hour or so today, but hopefully this will help someone out.

SQL 4 CDS 2.2.0 released

I’m pleased to release SQL 4 CDS 2.2.0 with two main improvements:

Column Comparisons

As I mentioned in a separate blog post, FetchXML now supports a new method of comparing values across columns. SQL 4 CDS has been able to execute queries such as these for some time, but needed to do the work itself. It can now execute the queries more efficiently for online instances that support the new functionality.

For example, the SQL query:

SELECT count(*)
FROM   contact
WHERE  firstname = lastname;

used to produce this FetchXML query:

<fetch xmlns:generator="MarkMpn.SQL4CDS">
  <entity name="contact">
    <attribute name="firstname" />
    <attribute name="lastname" />
    <attribute name="contactid" />
  </entity>
</fetch>

SQL 4 CDS would then retrieve each contact, check if the first and last names matched, and work out the count itself.

This update can now convert this query into the native FetchXML format:

<fetch xmlns:generator="MarkMpn.SQL4CDS" aggregate="true">
  <entity name="contact">
    <attribute name="contactid" alias="contactid_count" aggregate="count" />
    <filter>
      <condition attribute="firstname" operator="eq" valueof="lastname" />
    </filter>
  </entity>
</fetch>

which will execute much faster.

Bulk Updates with Joins

The ability to update records is a major feature of SQL 4 CDS. You can pull data from related records as part of the update using joins. This update fixes a bug that would cause an error when using the primary key of one entity during an update such as:

UPDATE a
SET    primarycontactid = c.contactid
FROM   account AS a
       INNER JOIN contact AS c
       ON a.accountid = c.parentcustomerid;

Update today from the XrmToolBox tool store to start taking advantage of these improvements!

Column Comparisons in FetchXML

I was very pleased today to see a new feature in FetchXML – column comparisons!

This allows us to build queries that compare the values in one column against those in another. Previously we’ve only been able to compare a column against a constant value.

So how does this look? The <condition> element has been extended with a new valueof attribute:

<fetch>
  <entity name="contact">
    <attribute name="contactid" />
    <filter>
      <condition attribute="firstname" operator="eq" valueof="lastname" />
    </filter>
  </entity>
</fetch>

This will return the IDs of any contacts that have the same value in the first and last name fields.
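Running this from the SDK needs no special handling for the new attribute; a sketch, assuming a standard IOrganizationService connection:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Find contacts whose first and last names are identical,
// using the new valueof column comparison in FetchXML
static EntityCollection FindMatchingNames(IOrganizationService svc)
{
    var fetch = @"
        <fetch>
          <entity name='contact'>
            <attribute name='contactid' />
            <filter>
              <condition attribute='firstname' operator='eq' valueof='lastname' />
            </filter>
          </entity>
        </fetch>";

    return svc.RetrieveMultiple(new FetchExpression(fetch));
}
```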

What can/can’t it do?

You can use this new option with the most common condition operators:

  • equal
  • not equal
  • less than
  • less than or equal
  • greater than
  • greater than or equal

The two columns you’re comparing must be of the same type.

You can’t compare columns across entities, so these queries won’t work:

<fetch>
  <entity name='account'>
    <link-entity name='contact' from='parentcustomerid' to='accountid' />
    <filter>
      <condition attribute='name' operator='eq' valueof='contact.firstname' />
    </filter>
  </entity>
</fetch>

<fetch>
  <entity name='account'>
    <link-entity name='contact' from='parentcustomerid' to='accountid' />
    <filter>
      <condition entity='contact' attribute='firstname' operator='eq' valueof='lastname' />
    </filter>
  </entity>
</fetch>

If you do need to do this sort of thing, you can instead add a calculated field to one entity to take the value from the related entity, and then apply the filter to the calculated field instead. Alex Shlega has got some good details on this on his blog.

Why use it?

I find this particularly useful to identify some common data quality problems. As in the first example I showed, being able to find records with the same value repeated across multiple fields is a simple way to find a common way users “work around” required fields.

Previously we’ve had to do this by retrieving all the records and then comparing the values ourselves. To test the difference this approach can make I’ve built a simple test app:

  1. I’ve got a test instance with around 28,000 contacts, of which about 300 have the same first name as last name
  2. I connect to this instance using the SDK and run three versions of the same check:
    1. Retrieve all the contacts including the firstname and lastname fields, compare the names and count how many are the same
    2. Do the same again but filter the records with the new operator instead of in code
    3. As I’m just counting the records in this test, use an aggregate query to just return the count of records while filtering with the new operator

In each case I’ve made sure I load in all the pages of the results.
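The paging loop looked roughly like the sketch below – this is the QueryExpression form of the “retrieve everything” case, with the attribute names from the test above:

```csharp
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Retrieve every contact page by page so no rows are missed
static List<Entity> RetrieveAllContacts(IOrganizationService svc)
{
    var query = new QueryExpression("contact")
    {
        ColumnSet = new ColumnSet("firstname", "lastname"),
        PageInfo = new PagingInfo { PageNumber = 1, Count = 5000 }
    };

    var results = new List<Entity>();
    EntityCollection page;

    do
    {
        page = svc.RetrieveMultiple(query);
        results.AddRange(page.Entities);

        // Move to the next page using the cookie from the previous one
        query.PageInfo.PageNumber++;
        query.PageInfo.PagingCookie = page.PagingCookie;
    } while (page.MoreRecords);

    return results;
}
```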

The performance comparisons are quite impressive:

  • Filtering in code: 8.9 sec
  • Filtering in FetchXML: 0.3 sec
  • Aggregate with FetchXML filter: 0.1 sec

The performance difference you’ll see will depend on the ratio of the records you’re interested in to those you’d have to retrieve otherwise, but based on these performance numbers alone I’d advise you to take a look!

How to use it?

Currently this feature isn’t exposed in the Advanced Find interface, but I hope that’s coming soon. In the meantime you can use this in your plugins, Flows etc.

I’ve also updated SQL 4 CDS to use this feature where possible – from version 2.2.0 it will start taking advantage of it automatically; there’s nothing you need to change in your SQL queries or settings.

A FetchXML Builder update is also in the works.

Final Thoughts

FetchXML has had a few updates over the years to add extra operators to handle hierarchical data and some extra date filtering options, but this is probably the first major change to FetchXML since the aggregate query options were introduced and extended from CRM 3 to 2013.

There are still limitations to how this works, but I hope this is the start of a new period of investment in this area. With this change, and the current preview of the TDS Endpoint, the future for CDS data access is looking very bright.

D365 Posts Bot Released!

I’m very pleased to release D365 Posts Bot today!

This is the finished article from my earlier series of blog posts on creating a bot to link posts from D365 through to Teams and back again.

You can now either import my ready-made bot into your environment by following the quick-start guide, or you can take the open source code and host your own custom version.

Please give it a try and let me know how you get on!

Creating a bot pt. 8 – Handling Replies

Now we’ve got our bot sending out notifications, we want to be able to handle a reply from the user and add it back into D365 as a new post.

The adaptive card notification lets the user reply via an embedded form. This makes it nice and neat. When the user fills in this form and clicks Reply, three bits of information are sent back to the bot:

  • the ID of the post that the reply is for
  • the domain name of the CDS instance the post is in
  • the text of the reply from the user

This gives us everything we need to create a reply to the post back in CDS so it’s visible to everyone.

These values are available from the turnContext.Activity.Value object using the names defined in the adaptive card:

protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
{
    dynamic val = turnContext.Activity.Value;

    var message = val.comment;
    var postId = val.PostId;
    var domainName = val.DomainName;

    // ...
}

Saving the reply

The first thing we need to do is connect to CDS. We already did this earlier to figure out who to send notifications to. One slight difference here is that we want to create the post as the user sending the reply, not as our application user.

We need to translate the user ID that the bot framework gives us into the one used by CDS. We can’t translate directly between the two, but we can go via the User Principal Name:

var member = await TeamsInfo.GetMemberAsync(turnContext, turnContext.Activity.From.Id, cancellationToken);
var username = member.UserPrincipalName;

using (var svc = new CdsServiceClient(new Uri("https://" + domainName), _config.GetValue<string>("MicrosoftAppId"), _config.GetValue<string>("MicrosoftAppPassword"), true, null))
{
    var qry = new QueryByAttribute("systemuser");
    qry.AddAttributeValue("domainname", username);
    qry.ColumnSet = new ColumnSet("systemuserid");
    var user = svc.RetrieveMultiple(qry).Entities[0];

    svc.CallerId = user.Id;

    svc.Create(new Entity("postreply")
    {
        ["postid"] = new EntityReference("post", postId),
        ["text"] = message
    });
}

Direct Replies

That’s fine when the user replies with the form inside the adaptive card. But what if they just send a new message back to the bot? How do we figure out what they’re replying to?

One way would be to use the dialogs feature of the bot framework. Rather than just sending individual messages to the user for each notification, this would register more of a conversation that’s expecting replies of some format to move the conversation forward.

This seems like a lot of extra management for what we’re trying to achieve here. What we really need is to know what the last message was that we sent to each user, so when they reply we have some context for it.

We’re already saving some user information in Azure Table Storage, so we can just add a couple of extra columns to that table. If we store the ID of the last post that we notified the user of and the domain name of the CDS instance that it came from, we’ll have all the same context information available as if they’d used the adaptive card form.
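A sketch of what that extended table entity might look like – the property names here are my own invention, not taken from the released bot:

```csharp
using Microsoft.Azure.Cosmos.Table;

// User state stored in Azure Table Storage, extended with the context
// of the last notification we sent them (hypothetical property names)
public class UserNotificationState : TableEntity
{
    // Existing conversation details used to send proactive messages
    public string ConversationId { get; set; }

    // Extra columns: which post we last notified this user about, and
    // the CDS instance it came from
    public string LastPostId { get; set; }
    public string LastPostDomainName { get; set; }
}
```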

Wrapping Up

We’ve now got a system that handles everything I set out to do originally:

  • get notifications of new posts & replies from CDS
  • send push notifications out to users in Teams that are connected to each post
  • handle replies to those notifications and push them back to CDS

If someone is interested in the reply, this process will start over again and they’ll get a notification of the reply, and so it goes on. Users can then have a conversation in Teams, where they’re already spending much of their day, but with all the details being logged against the relevant record in D365 for posterity.

Dynamic Form Selection

Occasionally we might have an entity with multiple “types” of some sort, and each type needs a different form to show the details.

Luckily we have the formSelector API to do this. Taking a simple example where the name of the form to show is stored directly in an attribute you can use a script like this to automatically switch the form as the record is opened or when the attribute is changed:

markcarrington = {
  dev: {
    formSelector: function(executionContext, attr) {
      var formContext = executionContext.getFormContext();
      attr = attr || executionContext.getEventSource().getName();

      var newForm = formContext.getAttribute(attr).getValue();

      if (!newForm)
        return;

      var currentForm = formContext.ui.formSelector.getCurrentItem();

      if (currentForm.getLabel().toLowerCase() == newForm.toLowerCase())
        return;

      formContext.ui.formSelector.items.forEach(function(form, index) {
        if (form.getLabel().toLowerCase() == newForm.toLowerCase())
          form.navigate();
      });
    }
  }
};
You can then add this function to the form’s OnLoad event, passing through the name of the attribute that holds the required form name.

As well as the form’s OnLoad event, we can also handle the OnChange event of that same attribute so the form switches immediately after it’s changed. In this case we don’t need to pass in the attribute name, as the script picks it up from the execution context instead.

To give it a test I’ve set up Form1 and Form2 for accounts. You can see in the clip below that the form changes pretty smoothly as I navigate between records. If it doesn’t recognise the form name it will stick with whichever form it was on last.

Depending on your requirements you may want to change the logic in this script to determine which form to select based on some other criteria. For example, you might have a picklist field to select the type of record rather than the simple text field I’ve used here, so you might need to add some logic to convert between that value and the newForm variable in this script.

Creating a bot pt. 7 – Adaptive Cards

Now I’ve got my bot sending notifications to Teams, but the display of the messages left something to be desired. This is where Adaptive Cards comes in.

Adaptive Cards are a declarative user interface system that you can use across most bot channels. What does this mean? Possibly the simplest way to understand it is to look at the online interactive designer.

With the designer you can drag & drop user interface elements and see a JSON document describing it. You can then switch between different host apps (Teams, web chat etc.) to see how that same card would be rendered in each system. You don’t have to know anything about how each app works, you just describe what you want in your card and it takes care of how to display it.

Building a dynamic card

I need to get some dynamic data into my card before I send it. Ideally I want to send each user we’ve decided is interested:

  • a copy of the new post
  • who wrote the post
  • a link to the record it was posted on
  • some details of why each user is receiving the notification
  • an option to reply to the post

There’s a few ways to build this:

  1. build the JSON string manually for each card
  2. use the card templating SDK
  3. use the object model from the SDK

The templating SDK looked appealing, and is the way demonstrated in the designer. I couldn’t get quite the results I wanted with this though. In the end I built up each card using the SDK object model, which gave me complete freedom to customise each card as I needed.

Sending a card

Once you’ve got a card built, you can send it to the user as an attachment to a message:

var card = new AdaptiveCard { ... };

var activity = new Activity
{
    Type = ActivityTypes.Message,
    Conversation = new ConversationAccount { Id = ... },
    Attachments = new List<Attachment>
    {
        new Attachment
        {
            ContentType = AdaptiveCard.ContentType,
            Content = card
        }
    }
};
Validation errors

I had quite a few errors when I tried to send the message, as it does some extra validation of the card structure as it sends it.

In some cases you might get an exception as you try to send the message, and you can get some helpful details from the error message to help fix it.

In other cases the message will send, but the card appears as a blank white rectangle. These were harder to diagnose, but were generally because of some malformed JSON when I was trying other ways of generating the card.

Interactive elements

One of the things I really like about Adaptive Cards is the option to make them interactive. The main bit of information I want to get across in the card is the content of the new post, but I can add buttons (“actions” in Adaptive Card speak) to show extra details if the user is interested. You can also add input elements to get information from the user in a structured way to send back to the bot.
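As a sketch of how a Reply action can carry hidden context values back to the bot alongside the user’s input – the data names match the handler shown in the “Handling Replies” post, but the exact card layout here is illustrative, not the released bot’s code:

```csharp
using AdaptiveCards;

// Build a Reply button that opens a comment box and submits it,
// round-tripping the post id and instance domain back to the bot
static AdaptiveShowCardAction BuildReplyAction(string postId, string domainName)
{
    return new AdaptiveShowCardAction
    {
        Title = "Reply",
        Card = new AdaptiveCard(new AdaptiveSchemaVersion(1, 0))
        {
            Body =
            {
                new AdaptiveTextInput { Id = "comment", Placeholder = "Enter your reply" }
            },
            Actions =
            {
                new AdaptiveSubmitAction
                {
                    Title = "Send",
                    // Hidden values sent back with the user's comment
                    DataJson = $"{{ \"PostId\": \"{postId}\", \"DomainName\": \"{domainName}\" }}"
                }
            }
        }
    };
}
```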

Finished result

From my first basic attempt at sending a simple message, I’ve now got a much prettier card layout.

The card has:

  • a simple link to the record the post was on
  • a link to the user that posted the message
  • the contents of the new post, and the post it’s a reply to

It’s also got three actions that are shown as buttons along the bottom:

  • Sending a reply – this opens an extra area to enter the comment and send it
  • an option to describe to the user why they received the notification
  • a link to open the record that the message was posted on

The “why did I receive this?” option should help reduce any possible complaints from users about receiving notifications that they are not interested in. The chain it shows is the part I couldn’t generate using the templating SDK. It’s generated from the graph of records that the bot navigated around while it was determining who to send the notification to.

Generating links

Wherever possible I’ve tried to make everything in the card a link to the D365 record it relates to. To generate the link I’ve combined a few bits of information:

  1. The domain name of the instance that the notification came from – this comes from the web hook notification in the x-ms-dynamics-organization header
  2. The name or code of the entity type
  3. The guid of the record

These can be combined into the standard main.aspx record URL, using either the etn (entity name) or etc (entity type code) query string parameter together with the record’s id.
In most cases we’ll have the entity type name (account, contact etc.), but in a few cases (e.g. when parsing a link out of the post body) we have the entity type code (1 for account, 8 for user etc.)
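A sketch of combining these pieces – the main.aspx query string format is the standard record-addressing scheme documented for Dynamics 365, reproduced here from memory:

```csharp
using System;

// Build a link to a record from the instance domain plus either the
// entity logical name (etn) or the numeric entity type code (etc)
static string BuildRecordUrl(string domainName, Guid id, string entityName = null, int? entityTypeCode = null)
{
    var entityParam = entityName != null
        ? $"etn={entityName}"
        : $"etc={entityTypeCode}";

    return $"https://{domainName}/main.aspx?pagetype=entityrecord&{entityParam}&id={id:D}";
}
```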

Next steps

I’m quite happy with how this looks now, it presents all the required information in a simple to read way and makes it easy for the user to interact with.

The next stage I’ll look at is handling the responses from the user and pushing them back into D365.