Inside FetchXML pt 3 – filter

So far we’ve looked at using the <attribute> and <link-entity> elements to control what columns to include in your query, but for a truly useful query you’ll likely want to select which rows to include instead of taking everything. Enter <filter>!

Filter or Condition?

A <filter> is a group of individual <condition>s. Each condition applies a filter on a single field. The job of the filter is to combine those conditions with an AND or OR.

<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <filter type="and">
      <condition attribute="statecode" operator="eq" value="0" />
      <condition attribute="firstname" operator="eq" value="Mark" />
    </filter>
  </entity>
</fetch>

This query will retrieve the names of all contacts that are active (statecode = 0) and have a first name of Mark.

Nesting Filters

As well as containing conditions, filters can also contain other filters. This lets you create complex criteria by combining AND and OR logic:

<fetch>
  <entity name="contact">
    <attribute name="fullname" />
    <filter type="and">
      <condition attribute="statecode" operator="eq" value="0" />
      <filter type="or">
        <condition attribute="createdon" operator="on-or-after" value="2019-01-01" />
        <condition attribute="createdby" operator="eq" value="8109eeaf-44d8-4a9b-8c7a-95209b26f6e5" />
      </filter>
    </filter>
  </entity>
</fetch>

This will retrieve only contacts that are:

  1. active, AND
  2. at least one of
    1. created no earlier than the start of 2019, OR
    2. created by a specific user

Complex Operators

The most common operator you’ll use will be eq, short for equals. However, there’s a huge range of operators you can use, especially if you’re filtering on date or user fields. I won’t list them all here, but you can find the full list in the Microsoft documentation. The main ones you’re likely to use are:

  • eq
  • ne
  • null
  • not-null
  • like

It’s worth highlighting that the like operator can be used on lookup fields as well as text fields – in this case it is applied to the primary name attribute of the related record, while eq will be applied to the unique identifier stored in the lookup field itself.
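For example, a condition like the following (a hypothetical filter on the contact’s parentcustomerid lookup) would match contacts whose parent account or contact has a name starting with “Data8”, because the like is applied to the primary name attribute rather than the GUID:

<condition attribute="parentcustomerid" operator="like" value="Data8%" />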

For date values there are many operators [olderthan/last/next]-x-[hours/days/weeks/months], e.g. last-x-days. The value you use in the condition becomes the “x”, e.g. to find records created in the last 3 days:

<condition attribute="createdon" operator="last-x-days" value="3" />

For user lookup fields, such as ownerid, there are various operators to check if the field matches the current user (eq-userid), or any of the teams the current user is a member of (eq-userteams, eq-useroruserteams). It can also look through the user hierarchy to match users below the current user (eq-useroruserhierarchy, eq-useroruserhierarchyandteams). You can also filter by the same business unit as the current user (eq-businessid).
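These operators get their value from the identity of the user running the query, so no value attribute is required. For example, to match records owned by the current user or any of their teams:

<condition attribute="ownerid" operator="eq-useroruserteams" />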

While we’re talking about hierarchies, remember that any other entity type can also have a hierarchy defined, and you can use the above, under and similar operators to match any record in them rather than having to navigate them manually yourself.
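For example, a condition like this (with a placeholder GUID) matches every account below the given one in the hierarchy – swap under for eq-or-under to include that account itself:

<condition attribute="accountid" operator="under" value="8109eeaf-44d8-4a9b-8c7a-95209b26f6e5" />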

There’s some more good information about querying date and hierarchy data in the Microsoft documentation.

Condition Values

All conditions compare a field to a specific value. Importantly, there is no condition to compare one field to another field.

Most conditions require a single value to be specified in the value attribute, such as the equals operator:

<condition attribute="statecode" operator="eq" value="0" />

For some the value is implicit in the operator, so no value is required:

<condition attribute="primarycontactid" operator="null" />

A few can work with more than one value:

<condition attribute="statuscode" operator="in">
  <value>1</value>
  <value>4</value>
  <value>10</value>
</condition>

Conditions on lookup fields require the primary key guid value to filter on. If you build a query in Advanced Find and download the FetchXML you’ll also see the uiname and uitype attributes included in the condition to identify the name and type of the record respectively, but these are not used when the query is actually executed.

Dynamic Values

We saw some of the more advanced operators earlier, especially the date and user operators. These are particularly important when building views for your users to use on a regular basis.

For example, a view that shows active leads owned by the current user created in the last 7 days is probably more useful than one that shows leads created since a fixed date and owned by you, the author of the view.
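As a sketch, the filter for that sort of view just combines the operators we’ve already seen (assuming statecode 0 is the active state for leads):

<filter type="and">
  <condition attribute="statecode" operator="eq" value="0" />
  <condition attribute="ownerid" operator="eq-userid" />
  <condition attribute="createdon" operator="last-x-days" value="7" />
</filter>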

Filtering Linked Entities

There are two ways filters can be applied to linked entities – within the <link-entity> element, or within the main <entity> element using the entityname attribute on the conditions. Both are valid, but do subtly different things.

When the filter is applied within the <link-entity>, the filter is applied to the entity before it is joined. By adding an entityname attribute to a condition within the main filter, the filter is applied after the join.

When using inner joins there is no practical difference in the results, but with outer joins these approaches can be combined to generate a query to find records that do not have any related records that match some criteria, e.g.:

<fetch>
  <entity name="account">
    <attribute name="name" />
    <link-entity name="contact" to="accountid" from="parentcustomerid" link-type="outer">
      <filter>
        <condition attribute="statecode" operator="eq" value="0" />
        <condition attribute="firstname" operator="eq" value="Mark" />
      </filter>
    </link-entity>
    <filter>
      <condition attribute="contactid" entityname="contact" operator="null" />
    </filter>
  </entity>
</fetch>

This will retrieve all the accounts that do not have an active contact called Mark.

The list of contacts is filtered to only active Marks, and the list of accounts is linked to that filtered list using an outer join. As we looked at in the previous part of this series, the outer join type will include all the records from the first list (the accounts). If there are any matches from the second list (the contacts) then those are included, otherwise, all the contact fields in the result are left blank (null). The outer filter then restricts the final result to only those joined records that have a null contactid, and therefore didn’t find a matching contact.

Combining filters and joins to find records without related records

Note that, if you give the link entity an alias, it is that alias you should use in the entityname attribute, not the actual logical name of the entity.

SQL Equivalents

The main filter is equivalent to the SQL WHERE clause. For example, the first query in this post would be equivalent to:

SELECT fullname
FROM   contact
WHERE  statecode = 0
       AND firstname = 'Mark'

The from and to attributes on a <link-entity> form an ON clause for a join, and a filter within the link entity extends that ON clause. The filter created by the from and to attributes will always be combined with the additional filter using an AND. For example, the query from earlier:

<fetch>
  <entity name="account">
    <attribute name="name" />
    <link-entity name="contact" to="accountid" from="parentcustomerid" alias="contact" link-type="outer">
      <filter>
        <condition attribute="statecode" operator="eq" value="0" />
        <condition attribute="firstname" operator="eq" value="Mark" />
      </filter>
    </link-entity>
    <filter>
      <condition attribute="contactid" entityname="contact" operator="null" />
    </filter>
  </entity>
</fetch>

can be written in SQL as:

SELECT account.name
FROM   account
       LEFT OUTER JOIN
       contact
       ON account.accountid = contact.parentcustomerid
          AND contact.statecode = 0
          AND contact.firstname = 'Mark'
WHERE  contact.contactid IS NULL;

The more advanced operators do not have any direct equivalent in SQL. Many of the date conditions compare values only on the date part and ignore the time part to make it easier for the user to build useful queries, but would require a range comparison in SQL. The hierarchical operators would likely require more complex CTEs, and are a great example of how FetchXML delivers a much more powerful application-level query language rather than just a thin veneer over SQL.
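For example, assuming last-x-days compares on whole days and ignoring any time zone handling, the earlier last-x-days condition would need something like this range comparison in SQL:

SELECT fullname
FROM   contact
WHERE  createdon >= DATEADD(day, -3, CONVERT(date, GETUTCDATE()))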

In SQL 4 CDS these operators can be used through the pattern of attribute = operator(value), e.g.

SELECT fullname
FROM   contact
WHERE  ownerid = equseroruserteams()
       OR createdon = lastxweeks(4);

which is converted to:
<fetch xmlns:generator="MarkMpn.SQL4CDS">
  <entity name="contact">
    <attribute name="fullname" />
    <filter type="or">
      <condition attribute="ownerid" operator="eq-useroruserteams" />
      <condition attribute="createdon" operator="last-x-weeks" value="4" />
    </filter>
  </entity>
</fetch>

Inside FetchXML pt 2 – link-entity

To combine data from different records in your query you need to use the <link-entity> element. This is equivalent to a join in SQL.

Each <link-entity> needs 3 key pieces of information:

  1. The entity type to link to
  2. The attribute in the main entity to join from
  3. The attribute in the new entity to join to

<fetch>
  <entity name="account">
    <attribute name="name" />
    <link-entity name="contact" to="primarycontactid" from="contactid">
      <attribute name="fullname" />
    </link-entity>
  </entity>
</fetch>

The part that always confuses me is the naming of the to and from attributes on the <link-entity> element – in my head they should be the other way around, which is one reason why a tool such as SQL 4 CDS or FetchXML Builder is so useful to me.

Link Attributes

Unlike SQL, which can have very flexible join criteria, FetchXML requires exactly one attribute on the source entity to exactly match one attribute on the target entity.

Although joins would normally be done on attributes that form a relationship, this isn’t a requirement. You can join on any attributes that are compatible, e.g. you could use the following to find leads that have the same email address as a contact:

<fetch>
  <entity name="lead">
    <link-entity name="contact" to="emailaddress1" from="emailaddress1">
      <attribute name="fullname" />
    </link-entity>
    <attribute name="fullname" />
  </entity>
</fetch>

For a great example of putting this type of join into practice, take a look at Jonas Rapp’s blog on creating a view to show contacts in the same state as the current user.

Many-to-Many Relationships

If you have a many-to-many (N:N) relationship you can’t join the entities directly; you have to join via the intermediate “intersect” entity. This is a hidden entity type that is created for you automatically and isn’t available for you to view directly in your app or even look at in Advanced Find, but you need to be aware of its details when you are building queries that navigate these relationships.

For example, contacts can be part of many marketing lists, and each marketing list can contain many contacts, so this forms a many-to-many relationship. The contact entity itself can’t hold a list of all the marketing lists it’s part of in a field of its own, as there’s no way for it to store a potentially unlimited number of lists. Similarly, the marketing list entity can’t store all its members directly. Instead, a third intersect entity is used to join the two, and a record of this entity type is created for each contact/marketing list pair:

Contact-Marketing List many-to-many relationship diagram

contact:
  contactid   fullname
  1           Mark Carrington
  2           Joe Bloggs

listmember:
  entityid    listid
  1           1
  2           1
  2           2

list:
  listid      listname
  1           Latest Contacts
  2           Imaginary Contacts

With this data we can see there are two contacts and two marketing lists. The first contact is a member of one list, and the second contact is a member of both.

Querying this relationship to find a list of all contacts that are part of a marketing list, and the names of the lists they are part of, would look like:

<fetch>
  <entity name="contact">
    <link-entity name="listmember" to="contactid" from="entityid">
      <link-entity name="list" to="listid" from="listid">
        <attribute name="listname" />
      </link-entity>
    </link-entity>
    <attribute name="fullname" />
  </entity>
</fetch>

Finding the intersect details

To build this query you need to know the names of the intersect entity and the attributes in it to join on. You can find the intersect entity name from the relationship properties in the customisation screens, for example here in the Maker Portal:

Intersect entity name shown in Maker Portal

The intersect entity is shown here in the “Relationship entity name” field. Unfortunately it doesn’t show the foreign key fields on this entity that link to the two main entities. To find those names you’ll need to use a tool such as SQL 4 CDS or Metadata Explorer.

With SQL 4 CDS, you can use the Object Explorer pane to help add the correct syntax for a join. Double-click on a relationship to add the join into your query. If you select an N:N relationship the join will include all three required entities. For example, the contact to marketing list relationship includes the listmember entity:

Generate many-to-many join in SQL 4 CDS

Link Types

FetchXML supports two link types – inner and outer.

An inner join will include only records where there is a match across both entities. By comparison, an outer join will include all the records from the first entity even if there is no match in the second entity.

We can modify the first query to include all accounts even when they don’t have a primary contact by adding the link-type="outer" attribute:

<fetch>
  <entity name="account">
    <attribute name="name" />
    <link-entity name="contact" to="primarycontactid" from="contactid" link-type="outer">
      <attribute name="fullname" />
    </link-entity>
  </entity>
</fetch>

Link Aliases

When a query with a <link-entity> is executed, the attributes retrieved from the link are given a prefix of the entity name, e.g.:

name        contact.fullname
Data8 Ltd   Mark Carrington

As I showed in the post on the <attribute> element, this prefix can be removed by giving the attribute itself an alias.

Another option is to give the <link-entity> an alias, which changes the prefix, e.g.:

<fetch>
  <entity name="account">
    <attribute name="name" />
    <link-entity name="contact" to="primarycontactid" from="contactid" alias="person">
      <attribute name="fullname" />
    </link-entity>
  </entity>
</fetch>

name        person.fullname
Data8 Ltd   Mark Carrington

Limits

Each query can contain a maximum of 10 links. Hopefully this should be enough for most queries. There is no way (that I know of) to increase this limit, so if you do need more you’ll need to execute multiple separate queries and join the results in your own code.

Because the links are ultimately converted to SQL joins, having more joins than this would likely result in some inefficient queries as the database engine would start hitting query optimizer timeouts, so this seems like a reasonable limit.

SQL Equivalents

The inner and outer link types are equivalent to the INNER JOIN and LEFT OUTER JOIN join types in SQL.

The from and to attributes in the link are equivalent to the ON clause of the join, and the alias attribute is equivalent to the AS table alias clause.

However, the FetchXML syntax is more restrictive than SQL as you can only use a single exact field match for the join criteria, rather than the complex criteria that you can use in a SQL ON clause (although it is possible to extend this a little further using a <filter> element which I’ll look at next time). Aliases can also only be applied to a <link-entity> and not the root <entity>.

In SQL 4 CDS, the first entity you reference in the FROM clause will be used as the main <entity> and any following joins will become <link-entity> elements. Although you can apply table aliases at any point, they will only be reflected in the generated FetchXML in the <link-entity> elements, e.g.:

SELECT a.name,
       c.fullname
FROM   account AS a
       LEFT OUTER JOIN
       contact AS c
       ON a.primarycontactid = c.contactid;

is translated to:

<fetch xmlns:generator="MarkMpn.SQL4CDS">
  <entity name="account">
    <link-entity name="contact" to="primarycontactid" from="contactid" alias="c" link-type="outer">
      <attribute name="fullname" />
    </link-entity>
    <attribute name="name" />
  </entity>
</fetch>

Next time I’ll take a look at the <filter> element and particularly see how that can be combined with <link-entity>.

Inside FetchXML pt 1 – attribute

As promised, I’m starting a series of blog posts covering various aspects of FetchXML and the humble <attribute> element seems like a good place to start.

Simple SQL and FetchXML query retrieving attributes

The purpose of running a FetchXML query is to get some data back, and the <attribute> element defines which attributes get included in the result.

You can add as many <attribute> elements to your query as you need, and you can add them to your main <entity> and any <link-entity> elements.

<fetch>
  <entity name="account">
    <attribute name="name" />
    <attribute name="telephone1" />
    <link-entity name="contact" to="primarycontactid" from="contactid" link-type="inner">
      <attribute name="fullname" />
      <attribute name="telephone1" />
    </link-entity>
  </entity>
</fetch>

This will return the name and telephone number of accounts, and the name and number of the related primary contact.

Including all attributes

Because nothing is free, you should always include only the attributes you need in the result. It’s easy to include everything, just in case, but your queries will run noticeably slower.

If you really do want to include all the attributes, you can use the <all-attributes> element instead of lots of individual <attribute> elements to make your query easier to read. This will include the attributes from the <entity> or <link-entity> you add it to, not all the different entities in your query. If you have a query that is joining two entities and want all the attributes from both entities, you’ll need an <all-attributes> element within both the top-level <entity> and the <link-entity>:

<fetch>
  <entity name="account">
    <attribute name="name" />
    <attribute name="telephone1" />
    <link-entity name="contact" to="primarycontactid" from="contactid" link-type="inner">
      <all-attributes />
    </link-entity>
  </entity>
</fetch>

This will return the name and telephone number of accounts, and all attributes of the related primary contact.

Aliasing columns

The results of a FetchXML query contain a column for each attribute. By default, those columns have the same name as the attribute. For example, the first query would produce an output like:

name        telephone1      contact.fullname   contact.telephone1
Data8 Ltd   0151 355 4555   Mark Carrington    +44 151 355 4555

You can choose to change these names using aliases. This becomes more useful when we look at aggregate queries, but you can still use them for simple queries too.

<fetch>
  <entity name="account">
    <attribute name="name" alias="account" />
    <attribute name="telephone1" alias="phone" />
    <link-entity name="contact" to="primarycontactid" from="contactid" link-type="inner">
      <attribute name="fullname" alias="name" />
      <attribute name="telephone1" alias="tel" />
    </link-entity>
  </entity>
</fetch>

account     phone           name              tel
Data8 Ltd   0151 355 4555   Mark Carrington   +44 151 355 4555

Without the aliases, the attributes retrieved from the linked entity were prefixed with the name of the entity. When you use an alias that prefix isn’t used, so you need to make sure that the alias names are unique across your query, not just within the individual entity.

Although you can apply aliases in this way, I wouldn’t recommend it simply because it makes it less obvious where the data has come from. When you execute the query the value will be wrapped in an AliasedValue object which gives this information, but why make life harder?
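For reference, here’s a minimal sketch of unwrapping one of these values in C#, assuming a results EntityCollection returned by RetrieveMultiple for the aliased query above:

var entity = results.Entities[0];

// Aliased attributes come back wrapped in an AliasedValue, which records
// the entity and attribute the value originally came from
var aliased = (AliasedValue)entity["name"];

Console.WriteLine(aliased.EntityLogicalName);    // "contact"
Console.WriteLine(aliased.AttributeLogicalName); // "fullname"
Console.WriteLine((string)aliased.Value);        // "Mark Carrington"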

Aliases also don’t work well with the deprecated ExecuteFetchRequest message. If you’re using that, please look at moving to RetrieveMultiple instead. If you can’t for any reason you’ll find that aliased columns generally don’t appear in your results. The only time I’ve got them to appear is when the alias name matches another attribute name in the query, which is pretty useless. If you run a query in FetchXML Builder with the “Raw fetch result” view it uses this method, so don’t be surprised if you don’t get the expected results in there.

SQL equivalents

In SQL 4 CDS, attributes are included using the standard SELECT clause in your query. For example, the first query would be written in SQL as:

SELECT a.name,
       a.telephone1,
       c.fullname,
       c.telephone1
FROM   account AS a
       INNER JOIN
       contact AS c
       ON a.primarycontactid = c.contactid;

Aliases can be added using the AS keyword, e.g.:

SELECT a.name AS account,
       a.telephone1 AS phone,
       c.fullname AS name,
       c.telephone1 AS tel
FROM   account AS a
       INNER JOIN
       contact AS c
       ON a.primarycontactid = c.contactid;

All attributes can be included using *:

SELECT *
FROM   account AS a
       INNER JOIN
       contact AS c
       ON a.primarycontactid = c.contactid;

You can mix and match selecting individual attributes from some entities and all attributes from others in a join as well:

SELECT a.*,
       c.fullname
FROM   account AS a
       INNER JOIN
       contact AS c
       ON a.primarycontactid = c.contactid;

Although SQL can support much more complex SELECT clauses, performing calculations and calling functions, these are not supported in FetchXML and therefore in SQL 4 CDS. If you try to write a query in SQL 4 CDS such as:

SELECT 1 + 1
FROM   account;

SELECT dateadd(d, 1, createdon)
FROM   account;

you’ll get an error from SQL 4 CDS:

Unhandled SELECT clause: 1 + 1
Unhandled function: dateadd(d, 1, createdon)

Next time I’ll take a closer look at the <link-entity> / JOIN syntax for linking multiple entities.

SQL 4 CDS Update!

Firstly, thank you to everyone who’s shown an interest in my SQL 4 CDS XrmToolBox tool! I’ve been busy working on the difficult second release, and I’m pleased to announce version 1.0.3 is in the new Tool Library today!

If you’re using it, please give it a rating while you’re there 🙏

What’s new?

The two main focus areas of this update are extended SQL compatibility and integration with FetchXML Builder.

Updated FetchXML view

Each FetchXML view now has an “Edit in FetchXML Builder” button at the top which allows you to switch to everyone’s favourite FetchXML editor and make use of all its power to edit the query further or convert it into various other formats.

The SQL to FetchXML conversion now also understands a wider range of queries, including:

  • Additional filter criteria on joins
  • Improved aggregate support
  • Wider support for aliases

What’s next?

For the core query language of CDS, FetchXML seems poorly documented and understood. Many things I’ve picked up from old blogs or simply by trial and error.

I’ve learned a lot about FetchXML while writing this tool, so look out for a mini-series on some of the SQL 4 CDS features and the weird and wonderful things that have driven me nuts over the past few months!

Query CDS with SQL!

If you’re anything like me you probably find querying the data in CDS a pain, either using the Advanced Find interface or writing FetchXML. Tools such as FetchXML Builder help a lot, but I still think about the query I want to write in good-old SQL and then have to translate it into FetchXML.

To make my life easier I created a tool to do the translation for me, and now I’ve released it as an XrmToolBox tool for you to use too – SQL 4 CDS!

Write and execute CDS queries as SQL

Write your query as SQL and either execute it straight away or see it converted to FetchXML for you to use elsewhere.

Please give it a go and let me know any feedback!

MSDyn365 Internals: Merge Permissions

The ability to merge records is a powerful one in keeping control of your data quality within PowerApps / D365. It can also be deceptively complex. As we’ve been living & breathing this area of the platform for several years during the development of data8 duplicare, we’ve seen a lot of unexpected things show up when merging particular records. To help understand why you might get an error when merging, I’ve tried to “deconstruct” the merge process below.

Standard Microsoft Dynamics 365 Merge screen

Basic Validation

There’s a whole bunch of conditions that must be met before two records can be merged. If these aren’t met, you should get an error message back pretty quickly. Most of these errors are quite self-explanatory, but there are some that took a little more digging to get to the bottom of.

  1. You can’t merge a record with itself. Kind of goes without saying, but we had to check! If you try and merge a record with itself you’ll get the error Merge cannot be performed on master and sub-entities that are identical.
  2. You can only merge accounts, contacts, leads and cases. Try to merge anything else and you’ll get the error This type: {name} is not supported with merge operation
  3. Both the master and subordinate records must be active. If you try to merge an inactive record you’ll get one of the following messages:
    • master entity:{name}-{id} is deactive
    • sub-entity:{name}-{id} is deactive
  4. Security checks. You need several different security privileges to be able to merge:

Privilege   Required On
Read        Master Record
Write       Master & Subordinate Records
Share       Master Record
Append To   Master Record
Merge       Global

If you’re missing any of these you’ll get the rather ambiguous message “Merge is not allowed: caller does not have the privilege or access”. You can use a tool such as the XrmToolBox Access Checker plugin to find out exactly which privilege you are missing. If you’re on-prem then you can take a look at the server trace logs to get more details.

Access Checker plugin in XrmToolBox

Advanced Validation

Now we’ve got the basics sorted, there are some more detailed validation checks that the system will run depending on the type of records being merged:

  1. You can’t merge cases with different parents. If you try you’ll get the error “Child cases having different parent case can not be merged.”
  2. You can’t merge accounts or contacts with different parents unless you specify the PerformParentingChecks parameter as false. Otherwise you’ll get one of the following messages:
    • Merge warning: sub-entity might lose parenting (if the subordinate record has a parent but the master doesn’t)
    • Merge warning: sub-entity will be differently parented. (if the master and subordinate records both have different parents)
  3. You can’t merge cases that have more than 100 child cases between them. There is a system-defined limit of 100 child cases per parent case, so if you try to merge two cases that each have 60 child cases, that would result in the final merged case having 120 children, which isn’t allowed. This triggers the error message A Parent Case cannot have more than 100 child cases. Contact your administrator for more details
  4. You can’t merge accounts or contacts that have an active quote. Specifically, the subordinate record can’t have an active quote, but the master record can. If you try to merge two records where the subordinate has an active quote you’ll get the error Merge cannot be performed on sub-entity that has active quote.
  5. You can’t merge records if that would create a loop in a hierarchy. For example, if you have a hierarchy of accounts A <- B <- C and you merge A and C, the parent of A could become B (copying the parent from C) while the parent of B is still A. If you end up in this situation you’ll get one of the following errors:
    • Merge could create cyclical parenting. (account)
    • Loop exists in the contacts hierarchy. (contact)
    • Creating this parental association would create a loop in Contacts hierarchy. (contact)

Move Related Records

Now it’s done all the up-front checks, it’s time to move on to the interesting bit – moving the records that were related to the subordinate record over to the master.

At this point the system goes through all 1:N and N:N relationships with a Merge cascade type of Cascade All (note that you can’t change this cascade type – it will typically be Cascade All except for a few system-managed relationships).

All records related to the subordinate record through one of these relationships will be updated to refer to the master record. This requires Append and Write privileges on the related record. If you don’t have those privileges you’ll get an error like SecLib::AccessCheckEx2 failed. Entity Name:activitypointer,OwnershipTypeMask:UserOwned, ObjectId: 98dbe456-43b3-e711-80e6-3863bb35af60, OwnerId:843ef80e-6a51-e511-9759-f0921c100524, OwnerIdType:8, OwnerData: roleCount=2, privilegeCount=2993, accessMode=0 and CallingUser:100781f5-c4d6-e311-aec5-d89d6764507c, CallerBusinessId:0c92636a-d920-e511-b427-d89d67632c70 PrincipalData: roleCount=5, privilegeCount=1141, accessMode=0. ObjectTypeCode:4200, ObjectBusinessUnitId:0c92636a-d920-e511-b427-d89d67632c70, AccessRights: WriteAccess. Computed rightsToCheck=WriteAccess, grantedRights=8, hsmGrantedRights=None, grantedRightsWithHsm=8,

The Entity Name, ObjectId and AccessRights parts of this message identify the type of record, the ID of the individual record that the user is missing permissions on, and the permission that is missing.

It will also update any activities that have the subordinate record as a party (sender, to, organizer etc.) to use the master record instead, even though the Activity Party relationships have a Merge cascade type of Cascade None.

Each of these updates can in turn trigger workflows or plugins that can have any number of other side effects, so it’s quite common to see other errors here. It can be difficult to debug these – the simplest way beyond trial and error is to get access to the trace logs which should tell you what entity type and record ID is triggering the error.

If you find that merging is taking a long time, it’s normally this step that’s the problem. Check your foreign key indexes, synchronous plugins and workflows.

Sharing

The next non-obvious part of the process is that any related records that have been moved from the subordinate to the master will now be shared with the owner of the master record. For example, if you merge two accounts, the contacts that were associated with the subordinate account will be shared with the owner of the master account. This tends to make things “just work” as most users would probably expect, but may break your security model. If you want to disable this behaviour you can do so by changing the GrantFullAccessForMergeToMasterOwner organisation setting to False.

Similarly, the master record will be shared with the owner of the subordinate record. To turn this off, use the GrantSharedAccessForMergeToSubordinateOwner organisation setting.

Finally, reparenting rules are applied. Any 1:N relationships with a Reparent cascade rule will be processed to share the related records with the owner of the master record.

Updating the Master

Any updates to the master record are applied, e.g. if you want to take data from the subordinate record that is missing or incorrect on the master.

In the merging UI in the web application you can only select values to update that are in the same field on each record, i.e. use Email Address 1 from either the master or subordinate. Using the SDK however, there’s nothing to stop you taking values from elsewhere, or even just making up entirely new values that don’t exist in either record, by populating the UpdateContent parameter of the MergeRequest.
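As a minimal sketch of what that looks like through the SDK, assuming an IOrganizationService in org and the two account GUIDs in masterId and subordinateId:

var request = new MergeRequest
{
  Target = new EntityReference("account", masterId),
  SubordinateId = subordinateId,
  PerformParentingChecks = true,

  // Values to apply to the master record as part of the merge - these
  // don't have to come from either of the original records
  UpdateContent = new Entity("account")
  {
    ["telephone1"] = "0151 355 4555"
  }
};

org.Execute(request);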

Deactivating the Subordinate

The subordinate record is linked to the master record and deactivated. The state code and status code of the subordinate will be set as follows:

Entity Type   State Code   Status Code
Account       1            Default
Contact       1            Default
Lead          2            Default
Case          2            2000

This can trigger an error when restricted state transitions have been set up for cases: “The merge couldn’t be performed. One or more of the selected cases couldn’t be cancelled because of the status transition rules that are defined for cases.”

The default status code for a state code can be selected within the attribute customization screen; currently this can only be done through the classic UI.

Version 9 Changes

Merging has stayed essentially unchanged since at least version 4, but version 9 introduced a few new standard plugins that attach to the Merge message and can trigger some different error messages.

In the Move Related Records section above, we saw the rather unhelpful error message that was returned when the user doesn’t have permissions to move one of the related records from the subordinate to the master record. Version 9 introduces a new plugin Microsoft.Dynamics.Sales.Plugins.PreOperationAccountMerge, and while it’s not immediately clear what this plugin does, it does generate a more helpful version of the same error:

Principal user (Id=100781f5-c4d6-e311-aec5-d89d6764507c, type=8) is missing prvReadCustomerOpportunityRole privilege (Id=db84e2ea-44fe-44ff-b01c-bd1b3d3d07ae)

Rather less helpfully, it also introduces plugins for the Microsoft Dynamics for Marketing product. One of these, Microsoft.Dynamics.Marketing.Plugins.PreDisassociateEntitiesPlugin, will commonly trigger errors when the subordinate record being merged is a member of a marketing list. If you receive an error such as:

Disassociation cannot be performed between an entity of type list and an entity of type account.; [Microsoft.Dynamics.Marketing.Plugins: Microsoft.Dynamics.Marketing.Plugins.PreDisassociateEntitiesPlugin]

then you have hit the bug in this plugin. There isn’t anything wrong with your process, code or data, but to work around the bug you’ll need to first remove the subordinate record from any marketing lists then merge the records again. If you want to preserve the marketing list membership, you’ll need to manually add the master record to the same marketing lists as the subordinate record was part of.

MSDyn365 Internals: Plugin Execution Depth and v9

If you’ve been working with plugins in Microsoft Dynamics 365 for a while, you’ve probably come across the IExecutionContext.Depth property. This tells your plugin how deep in a call stack of plugins it is. A depth of 1 means it is being triggered by a direct user action, e.g. updating a record. If that plugin does something that triggers another plugin, that plugin will have a depth of 2, and so on.

Preventing Infinite Loops

Because plugins can do operations that trigger other plugins, there is a danger that poorly written code can end up in an infinite loop of one plugin calling another, which calls the first plugin again, which calls the second one, and so on until all your server resources are used and it dies, giving you some very unhappy users.

To prevent this, the Microsoft Dynamics 365 platform enforces a limit on the plugin depth. By default, this is set to 8. This means that when the 8th plugin is fired, the platform will kill that entire stack with the error message:

This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.
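In your own plugins, a common defensive pattern (my sketch, not from the platform documentation) is to check Depth early and bail out if the plugin was triggered by anything other than a direct user action:

public void Execute(IServiceProvider serviceProvider)
{
  var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

  // Depth 1 = triggered directly by a user action; anything higher means
  // we were triggered by another plugin or workflow, possibly ourselves
  if (context.Depth > 1)
    return;

  // TODO: main plugin logic
}

It’s a blunt check though – it also skips legitimate nested triggers, so use it with care.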

New features in v9

Version 9 of Microsoft Dynamics 365 for Sales brings many new features. At the same time, the underlying Power Platform has been separated from the 1st party apps such as Sales. Those new features are now being implemented in a similar way to 3rd party customizations, including using plugins.

For example, the new Action Card functionality uses plugins attached to the Create and Update messages on contacts.

"No Activity" action card

Errors after upgrading

If you were close to the maximum plugin depth limit before, you might start seeing some of these “infinite loop” errors after upgrading. This isn’t because of any new problem in your code, but because Microsoft’s own plugins are using up some of the available plugin depth.

We had an example of this today – a plugin that synchronizes a hierarchy of entities coming from an ERP system to a deduplicated hierarchy of account and contact entities started failing. Turning on plugin trace logging gave us the answer:

System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help. (Fault Detail is equal to Exception details: 
ErrorCode: 0x80044182
Message: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help.; [Microsoft.Dynamics.AppCommon.Plugins: Microsoft.Dynamics.AppCommon.Plugins.PostOperationHandleContactActionCard]
[7cb6ad1b-7e8d-4955-a2a9-694bef2c84dc: Create of acton card for contact]
Entered Microsoft.Dynamics.AppCommon.Plugins.PostOperationHandleContactActionCard.Execute(), Correlation Id: f44c7c54-f16d-408f-903e-e386f22aadc3, Initiating User: c1cab4c8-f553-432c-8430-10262d185543
Exception: System.ServiceModel.FaultException`1[Microsoft.Xrm.Sdk.OrganizationServiceFault]: This workflow job was canceled because the workflow that started it included an infinite loop. Correct the workflow logic and try again. For information about workflow logic, see Help. (Fault Detail is equal to Exception details: 
ErrorCode: 0x80044182
Message: This workflow job was canceled bec...).

The error is coming from the new action card plugin that was introduced in v9. The other information in the plugin trace log lets us see a series of plugin executions that we would normally expect, getting up to depth 7, before this error is eventually triggered.

Fixing the problem

Luckily this solution is on-premise, so we have the option of increasing the maximum plugin depth as a quick fix:

PS> $set = Get-CrmSetting -SettingType WorkflowSettings
PS> $set.MaxDepth = 10
PS> Set-CrmSetting -Setting $set

This got us working again quickly, but isn’t an option when we look forward to moving to Online.

As a longer-term solution we’ll need to look at refactoring the plugin to do more of the work in each plugin step so it doesn’t need to recurse so much.

Looking to the future

This is a really good example of the “brave new world” for Microsoft Dynamics 365. A world in which the 1st party apps such as Sales are truly built as apps on the Power Platform with the same options available to other customizers and ISVs. One of the powerful features of CRM was always its extensibility model. With this shift in approach I think we can only see this extensibility story getting even more compelling. We just have to be mindful that, even in environments with no other outside customizations, we still have Microsoft “customizations” in the system as well.

Blood Glucose Alerts with Microsoft Flow and Nightscout

As my son has Type 1 Diabetes (T1D), we use Nightscout to keep track of his blood glucose levels. With T1D it’s important to keep those levels in quite a tight range, so we use a few different apps to alert us when it’s going out of range. As I’ve started looking into Microsoft Flow, I thought it would be a good scenario to work through – take the readings that are stored in Nightscout and generate alerts on my phone when something’s going wrong.

Getting Started with Microsoft Flow

I did look to see if I could get Nightscout to trigger my Flow as it was needed. It has an IFTTT connector, but not Microsoft Flow, so to start with I built a flow using a Recurrence trigger to poll Nightscout every few minutes to get the latest data. This would cause us a problem in the real world as running it every 5 minutes would use 30 x 24 x 60 / 5 = 8,640 runs per month. Looking at the licensing page today, that would require a P2 license costing $15 per month, which isn’t ideal. Checking every 10 minutes brings us into the realm of a P1 license for $5 per month. Even so, it looks like a better long-term option is to get Nightscout to work out if the flow needs to run and use an HTTP trigger instead.

Extracting the Data from Nightscout

I used the “HTTP” action to get the latest 2 SGV readings from the Nightscout API.

Fetch data from Nightscout

Note that I’ve used the Queries setting to specify I want to get 2 readings back. The Nightscout API will automatically apply a reverse date ordering for me, so these will be the most recent 2 without me having to add my own sort order.

Parsing the Data

The previous step gets me a block of text like:

[
  {
    "_id": "5cab4051dcbac10d8021a4d0",
    "device": "AndroidAPS-DexcomG5",
    "date": 1554726975000,
    "dateString": "2019-04-08T12:36:15Z",
    "sgv": 126,
    "direction": "Flat",
    "type": "sgv",
    "NSCLIENT_ID": 1554727079041,
    "created_at": "2019-04-08T12:36:33.713Z"
  },
  {
    "_id": "5cab3f22dcbac10d8021a4cd",
    "device": "AndroidAPS-DexcomG5",
    "date": 1554726675000,
    "dateString": "2019-04-08T12:31:15Z",
    "sgv": 130,
    "direction": "Flat",
    "type": "sgv",
    "NSCLIENT_ID": 1554726775551,
    "created_at": "2019-04-08T12:31:30.234Z"
  }
]

This isn’t very usable, so the first thing is to parse it using the “Parse JSON” action. Although Nightscout is supposed to expose a Swagger endpoint I couldn’t get it to work, so I used the option to generate a schema based on a sample payload. This tells the action what structure to expect to see in the JSON text, and therefore what data subsequent steps in the flow will have.

Parsing JSON

The one critical part of the data we have got is the sgv value – this tells us the “sensor glucose value” measured by the CGM device my son is wearing. However, this value is reported in units of mg/dL, while in the UK we tend to work in units of mmol/L. The next step I added then is to create a simple data set of two mmol/L values using the “Select” action. This process should be familiar to anyone used to working with databases – it transforms each row in the input to a new row in the output by applying a series of mappings. In my case I created a new data set with one column called mmol.

Converting mg/dL to mmol/L

The calculation used to do the mapping is simply to divide the mg/dL value by 18:

Converting mg/dL to mmol/L

Because Flow expressions are based entirely around functions, I had to use div(x, y) instead of x / y. I find this a difficult habit to get into, and it caused me lots of frustrating “The expression is invalid” errors! The other thing to note here is how I used 18.0 instead of simply 18. This should be familiar to anyone from a development background – dividing integer values will give an integer result, but dividing floating point values will give a floating point result. So div(5, 2) = 2, but div(5, 2.0) = 2.5.

Using Variables

Now I’ve got all the data I need, in the format I need it. However, I’ve got it all in one table, but what I really need is two separate values. I need the latest value and the previous value separately in order to compare them later. To save me having to write some repetitive expressions later to keep extracting the right one, I used a couple of variables to extract them once and store them for later reuse using the “Initialize Variable” action.

The expression I used to get the latest value was first(body('Convert_SGV_to_mmol'))['mmol']. Because I gave my previous steps helpful names instead of the default “HTTP”, “Parse JSON”, “Select” etc., I now get a much easier to read and understand expression.

After repeating this with another “Initialize Variable” action and changing first to last to get the previous reading, I added one final variable to format the latest reading so a human can read it. Because we have been dealing with floating point numbers, they can appear as helpful things like 7.19999999999999 instead of 7.2, so I want to do a bit of tidying up of that value before anyone is going to see it. Unfortunately Flow doesn’t expose any functions to control this formatting itself, and firing it out to an Azure Function or similar seems like overkill, so I just used an expression to trim the string after the first decimal place: substring(string(variables('Latest SGV')), 0, add(indexOf(string(variables('Latest SGV')), '.'), 2))

Triggering Alerts

I’ve got all the data I need to hand and nicely formatted, so now I need to fire some alerts!

The basic criteria I wanted to try were:

  1. Rising over 7.0 mmol/L
  2. Falling below 5.0 mmol/L

We can work out whether the data matches these conditions by:

  1. Comparing the latest and previous values to check if the values are rising and falling
  2. Check if the latest value is above 7.0 and the previous value is below (or equal to) 7.0. This checks if the level has just gone over the boundary, rather than having passed it some time ago

Check for BG dropping below a threshold

If this condition is matched, I want to trigger a mobile notification. In the message of the notification I can use the formatted value stored in a variable earlier:

Send mobile notification

Once the notification has been sent, I used the “Terminate” action to stop the flow. Following the advice on Jonas Rapp’s blog, this helps stop the problem of exponentially-nesting conditional statements and makes the flow much easier to read.

First Thoughts on Flow

This was an interesting first use of Flow for me, and helped me start making sense of the platform. It’s probably not really a great use for Flow in the end though, for a few reasons:

  1. As there’s no appropriate Microsoft Flow trigger in Nightscout at the moment, it needs to poll regularly to get real time alerts
  2. The regular polling and use of HTTP action needs access to premium licensing
  3. There are already various other apps that give alerts based on the sort of criteria I’ve implemented here

I was also looking at Flow on the basis of it being a “citizen developer” way of producing bespoke alerts. I’m not sure I saw that aspect of the system through this process however. Although I wrote substantially less code than I might otherwise have done, it still required a lot of developer-y knowledge:

  • How to access data from a REST API
  • An understanding of JSON and the concept of parsing
  • Data manipulation concepts
  • Integer vs. floating point calculations
  • String manipulation functions

That’s not to say that you need a degree in computer science to build this, but it does seem rather more complex than the hype might suggest. Maybe I just picked the wrong example to see the citizen developer experience at its fullest.

What’s Next?

I’ll hopefully have plenty more opportunities to play with Flow in a business context as we look at moving our D365 implementation to Online. As for looking at Nightscout data, I’m next going to have a look at using Azure Stream Analytics. Hopefully that’ll make it possible to perform the sort of queries I’d like against the data. In addition, I’ll be attending D365UG Manchester tomorrow to start learning about Power BI with Matt Collins, which I hope to put into practice with this data.

MSDyn365 Internals: QualifyLead

One thing I love digging into with Microsoft Dynamics 365 is all the “special” actions. Although just about any entity type can be used with Create, Update, Retrieve etc., there are a lot of other actions that do more specialised jobs, and QualifyLead is a great one of these.

QualifyLead entities

The standard case of qualifying a lead sets up the entities shown in the diagram above:

  1. Account record is created
  2. Contact record is created and linked to the new account
  3. Opportunity record is created and linked to the new account (or contact, if no account is created)
  4. The lead is updated with references to the new account and contact in the parentaccountid and parentcontactid fields
  5. The lead status is changed to Qualified

Mappings

You can see details of how the system copies information from fields in the lead to those in the new account/contact/opportunity by using the Mappings configuration on the relationship between the lead and the other entity. For example, go to Settings > Customization, click Customize the System, expand the Lead entity and go to 1:N Relationships, double-click on the opportunity_originating_lead relationship, then click on Mappings in the left hand menu. Here you can select fields from the lead entity and the corresponding field in the opportunity entity. When the lead is qualified the system uses these mappings to copy information from the lead to the opportunity. You can repeat this process to change the mappings to the account and contact entities too.

Relationship Field Mappings

Parameters

From a developer perspective you can qualify a lead by using the QualifyLead action. There are two main documentation pages for it which are slightly contradictory. The QualifyLeadRequest page from the SDK documentation describes what each parameter is for, while the QualifyLead WebAPI action documentation does a better job at identifying which parameters are really required or not. Both of these sources list the main parameters:

  • CreateAccount
  • CreateContact
  • CreateOpportunity

As you might expect, you can set these to true or false to control whether or not the system will create each of those additional entities. If you don’t want to create an opportunity when qualifying your lead, simply set CreateOpportunity to false.

Four more optional parameters control some of the details of the opportunity that the system creates:

  • OpportunityCurrencyId
  • OpportunityCustomerId
  • SourceCampaignId
  • ProcessInstanceId

Once the lead is qualified, the statecode is changed to Qualified, and there is another parameter that controls what the statuscode is changed to:

  • Status
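Putting those together, here’s a minimal sketch of qualifying a lead from C#, assuming an IOrganizationService in org and the lead’s GUID in leadId (statuscode 3 is the default Qualified status):

var request = new QualifyLeadRequest
{
  LeadId = new EntityReference("lead", leadId),
  CreateAccount = true,
  CreateContact = true,
  CreateOpportunity = true,
  Status = new OptionSetValue(3) // Qualified
};

var response = (QualifyLeadResponse)org.Execute(request);

// The response lists the account, contact and opportunity records that were created
foreach (var created in response.CreatedEntities)
  Console.WriteLine($"{created.LogicalName}: {created.Id}");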

More Parameters!

So far, so good. However, there are some (as far as I’ve found) undocumented details to the behaviour of this action. Because they’re undocumented I’d assume they’re liable to change without notice, though I’d be surprised if they did:

  • Duplicate Detection. If creating the new account/contact/opportunity would create a duplicate that is identified by a duplicate detection rule, qualifying the lead will fail. You can avoid this if necessary by setting an additional parameter SuppressDuplicateDetection to true
  • Existing account & contact. If the parentaccountid or parentcontactid fields are set on the lead before qualifying, regardless of whether the CreateAccount and CreateContact parameters are set to true, no new account or contact will be created. The system won’t make any changes to the existing account & contact by mapping across data from the lead into the existing records – it ignores those mappings entirely.
  • Blank company name. If the companyname field on the lead is blank, regardless of whether the CreateAccount parameter is set to true, no new account will be created. Note that this doesn’t seem to be affected by what the mappings are between the lead and account entities – even if you map telephone1 on the lead to name on the account, it’s still the companyname field on the lead that controls this behaviour. If the lead contains data that only maps to the account record, you will effectively lose that data
  • Blank contact name. Similarly, if both the firstname and lastname fields on the lead are blank, no contact will ever be created, regardless of the mappings from lead to contact or what the CreateContact parameter is set to.

Permissions

Because QualifyLead can touch 4 different entities, there is plenty of scope for it to fail with security errors. You might need to be able to create the account, contact and opportunity, or if you’re using an existing account & contact you’ll need the AppendTo privilege on those entity types. If you’re missing a required permission you won’t get a nice helpful error, just the standard “Access Is Denied”. Download the log file to get detailed information on what privilege is missing, but it’s not easy to parse:

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body><s:Fault><faultcode>s:Client</faultcode><faultstring xml:lang="en-GB">SecLib::AccessCheckEx failed. Returned hr = -2147187962, ObjectID: 0b80cd6c-4114-e411-b9de-00155d00b203, OwnerId: cca04df7-0b02-e411-b9de-00155d00b203,  OwnerIdType: 8 and CallingUser: 40097a3c-4b91-e611-80c5-00155d007101. ObjectTypeCode: 1, objectBusinessUnitId: d32d38d8-e501-e411-b9de-00155d00b203, AccessRights: AppendToAccess </faultstring><detail><OrganizationServiceFault xmlns="http://schemas.microsoft.com/xrm/2011/Contracts" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"><ActivityId>dc3f52ee-8fbb-4b73-a79e-f1bf01ccd3bc</ActivityId><ErrorCode>-2147187962</ErrorCode><ErrorDetails xmlns:a="http://schemas.datacontract.org/2004/07/System.Collections.Generic"/><Message>SecLib::AccessCheckEx failed. Returned hr = -2147187962, ObjectID: 0b80cd6c-4114-e411-b9de-00155d00b203, OwnerId: cca04df7-0b02-e411-b9de-00155d00b203,  OwnerIdType: 8 and CallingUser: 40097a3c-4b91-e611-80c5-00155d007101. ObjectTypeCode: 1, objectBusinessUnitId: d32d38d8-e501-e411-b9de-00155d00b203, AccessRights: AppendToAccess </Message><Timestamp>2019-03-20T16:30:23.176178Z</Timestamp><ExceptionRetriable>false</ExceptionRetriable><ExceptionSource i:nil="true"/><InnerFault><ActivityId>dc3f52ee-8fbb-4b73-a79e-f1bf01ccd3bc</ActivityId><ErrorCode>-2147187962</ErrorCode><ErrorDetails xmlns:a="http://schemas.datacontract.org/2004/07/System.Collections.Generic"/><Message>SecLib::AccessCheckEx failed. Returned hr = -2147187962, ObjectID: 0b80cd6c-4114-e411-b9de-00155d00b203, OwnerId: cca04df7-0b02-e411-b9de-00155d00b203,  OwnerIdType: 8 and CallingUser: 40097a3c-4b91-e611-80c5-00155d007101. ObjectTypeCode: 1, objectBusinessUnitId: d32d38d8-e501-e411-b9de-00155d00b203, AccessRights: AppendToAccess </Message><Timestamp>2019-03-20T16:30:23.176178Z</Timestamp><ExceptionRetriable>false</ExceptionRetriable><ExceptionSource i:nil="true"/><InnerFault i:nil="true"/><OriginalException i:nil="true"/><TraceText i:nil="true"/></InnerFault><OriginalException i:nil="true"/><TraceText i:nil="true"/></OrganizationServiceFault></detail></s:Fault></s:Body></s:Envelope>

Not too helpful at first glance, but all the information you need is in there:

  • ObjectID: the GUID of the record that you don’t have a required privilege on
  • ObjectTypeCode: the type of the record. Unfortunately the error message presents this as an object type code rather than logical name, which I find much easier to work with. You can use tools such as Metadata Browser in XrmToolBox to find this, or there are websites that list the type codes for standard entities
  • OwnerID: the GUID of the user or team that owns that object
  • OwnerIdType: the type of record that owns the object. This will be either 8 for a user or 9 for a team
  • CallingUser: the GUID of the user that is making the request and missing the privilege
  • AccessRights: the exact type of privilege that the user is missing on the record

Put this information together and you can work out either the additional security role the user might need, or the change to the existing security role to implement. Of course, you might then get what looks like the same error again but with subtly different values which will point you to the next privilege you need to add.

Handling RetrieveMultiple Paging

When you use the RetrieveMultiple method to query data from D365 CE, it’s easy to take the results and assume that’s everything, especially when you’re working with small test data sets.

However, to stop a rogue query killing the server while it retrieves millions of records, data is retrieved in pages, by default pulling back 5,000 records at a time. I suspect it’s because this is quite a lot of records for a simple dev system that it’s easy to overlook the requirement to move on to subsequent pages.

Part of each request identifies which page number you want to retrieve, and how many records to retrieve at a time, as in the following example:

var qry = new QueryExpression("account")
{
  PageInfo = new PagingInfo
  {
    PageNumber = 1,
    Count = 500
  }
};

The first page is number 1, not zero. 🤓 It’s also worth noting that the maximum page size is 5,000 records. You can ask for bigger pages and you won’t get an error, but you will only get 5,000 records back regardless.

If you need to go straight to a specific page you can just specify a higher page number, but if you’re moving from one page to the next it’s more efficient if you also use the PagingCookie option:

while (true)
{
  var results = org.RetrieveMultiple(qry);

  // TODO: Do something with the results
  
  if (!results.MoreRecords)
    break;

  qry.PageInfo.PageNumber++;
  qry.PageInfo.PagingCookie = results.PagingCookie;
}

I’ve always just taken it as gospel from the documentation that using the PagingCookie is The Right Way™, but what difference does it actually make?

To check, I ran a quick test application to retrieve every email record in a test system (275,000 in total). With the paging cookie took 1:31, and without took 4:29, so definitely worthwhile doing one extra line of code for a 3x speed improvement!

If you’re displaying data to a user then you probably want to retrieve a single page and give the user the options to move through them using code such as this. Most of the code I write, however, interacts with data as a background operation and will always need to process all the records matched by a query. So instead of using RetrieveMultiple directly and having an outer loop to move between pages and an inner loop to process the records in a single page, I use a simple extension method to make things easier:

var qry = new QueryExpression("account");

// TODO: Set up the criteria etc. of the query

foreach (var account in org.RetrieveAll(qry))
{
  // TODO: Do something with the results
}

This makes it much clearer what the code is actually doing and hides the details of how the paging is happening in the background.

The RetrieveAll method is implemented as follows:

static class Extensions
{
  public static IEnumerable<Entity> RetrieveAll(this IOrganizationService org, QueryExpression qry)
  {
    // Set up paging
    qry.PageInfo = new PagingInfo
    {
      PageNumber = 1,
      Count = 500
    };

    while (true)
    {
      var results = org.RetrieveMultiple(qry);

      foreach (var entity in results.Entities)
        yield return entity;

      if (!results.MoreRecords)
        break;

      qry.PageInfo.PageNumber++;
      qry.PageInfo.PagingCookie = results.PagingCookie;
    }
  }
}

Hope this helps you query your D365 CE data a little bit easier!