Managing Connections in XrmToolBox

If you’re a Microsoft Dynamics 365 (hereafter called CRM, because I’m too old to change) admin or developer, you’ve probably used XrmToolBox. If not, go and check it out now. I’ll wait.

Now, the first thing you’ll do when you start using XrmToolBox is set up a connection to your CRM instance. If you work at a partner organisation, you might end up adding connections to lots of different instances for all your customers. How do you manage this efficiently and securely across your team?

Although I’ve been using XrmToolBox for several years, one feature I’d never appreciated before is the option to have multiple lists of connections.

XrmToolBox Connections Manager

By using this feature you can have a shared file of connection details that everyone in your team can reference. This gives everyone a consistent experience, makes it quicker to get new team members up to speed, and ensures that everyone immediately has access to any new environment that gets added.

Connection Groups

Once a new file of connections has been added, it will show up as a separate group of connections in the status bar at the bottom of XrmToolBox, ready for you to connect to any of the environments in the new list.

I recently contributed a change to the MscrmTools.Xrm.Connection package used by XrmToolBox that allows these connection lists to be loaded from an HTTP URL as well as from a file. I was very pleased to see this included in the recent 1.2019.2.32 release of XrmToolBox, so everyone can benefit from it.

The important difference this makes is that you can generate a connection list dynamically based on whatever criteria you want to implement. As an example, at Data8 we let people register the details of their CRM environments on our website, and we can now use a simple internal site to generate a connection file appropriate for the user requesting it, making sure each of our analysts can immediately access the connections that are relevant to them.

To generate a connection list dynamically you need to dive a little into the underlying XML format of the connection list. Assuming that CRM developers are probably also .NET developers, I’ve put together a few simple C# classes that correspond to the format XrmToolBox expects, and an example ASP.NET MVC action that generates an XML document from these classes using the XmlSerializer class.
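
As a minimal sketch, those classes might look something like this. The element and property names are my reading of the connection files XrmToolBox saves locally, so treat them as assumptions: save a connection file from your own copy of XrmToolBox and compare against it before relying on them.

    using System;
    using System.Collections.Generic;
    using System.Xml.Serialization;

    // The root element of a connection list file
    [XmlRoot("CrmConnections")]
    public class CrmConnections
    {
        // The name shown for this group of connections in XrmToolBox
        public string Name { get; set; }

        [XmlArray("Connections")]
        [XmlArrayItem("ConnectionDetail")]
        public List<ConnectionDetail> Connections { get; set; } = new List<ConnectionDetail>();
    }

    // One connection within the list
    public class ConnectionDetail
    {
        public Guid ConnectionId { get; set; }

        // The display name of the connection
        public string ConnectionName { get; set; }

        // The organization service URL, e.g. https://myorg.crm11.dynamics.com/XRMServices/2011/Organization.svc
        public string OrganizationServiceUrl { get; set; }

        // Credentials - leave UserName blank and set UseIntegratedSecurity to use the current Windows user
        public string UserDomain { get; set; }
        public string UserName { get; set; }
        public bool UseIntegratedSecurity { get; set; }
    }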

Now that we’ve got those helper classes in place, we can add a controller to an ASP.NET MVC app to generate a connection list XML document that XrmToolBox can use:
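
Here’s a sketch of such a controller. The connection shown is a hard-coded placeholder; in practice you’d build the list from whatever data source you have.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Web.Mvc;
    using System.Xml.Serialization;

    public class ConnectionListController : Controller
    {
        // GET /ConnectionList
        public ActionResult Index()
        {
            // Build the list however you like - from a database, filtered by User.Identity.Name, etc.
            var connections = new CrmConnections
            {
                Name = "Shared Connections",
                Connections = new List<ConnectionDetail>
                {
                    new ConnectionDetail
                    {
                        ConnectionId = Guid.NewGuid(), // in practice, use a stable ID per connection
                        ConnectionName = "Contoso Production",
                        OrganizationServiceUrl = "https://contoso.crm11.dynamics.com/XRMServices/2011/Organization.svc",
                        UseIntegratedSecurity = true
                    }
                }
            };

            // Serialize to XML and return it with an XML content type so XrmToolBox can load it from the URL
            var serializer = new XmlSerializer(typeof(CrmConnections));
            using (var stream = new MemoryStream())
            {
                serializer.Serialize(stream, connections);
                return File(stream.ToArray(), "text/xml");
            }
        }
    }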

Of course, the details of how you actually populate the list are up to you: read them from a database, filter them based on the current user, and so on. At this point the world is your oyster!

Once you’ve got your website written, deploy it to a local web server. XrmToolBox will use Windows authentication where necessary, so you can use that to easily lock down access to your site.

Finally, add the URL to XrmToolBox. Go to the Connection Manager screen, select <Add an existing connection file>, and enter the URL of your internal site. You should see the list of connections available straight away in XrmToolBox.

A few limitations of using a URL for a connection list instead of a file:

  • If you change the list, it won’t be automatically reloaded until you restart XrmToolBox
  • You can’t edit or remove connections from the list within XrmToolBox
  • There is no tracking of most recently used connections

Diagnosing Azure Batch Pool Scaling Timeout

As I wrote about recently, I’ve started using Azure Batch to run OpenAPS Autotune jobs for AutotuneWeb. The other day, however, I started a job of my own and got a notification that it was 48th in the queue. Either the service had suddenly got really popular, or something had gone wrong.

A quick look at my Azure Portal showed me that the Azure Batch pool was stuck trying to automatically scale up to 20 VMs, but it was getting a timeout error:

Pool resize timeout error

I initially wondered if it was a problem due to trying to go straight from 0 to 20 nodes, so I tried a few alternative scale formulas to just use a single dedicated node instead, but I still got the same problem.
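
For reference, the simplest of those alternative formulas just pinned the pool to a single dedicated node, using the standard Azure Batch autoscale variables, something along these lines:

    $TargetDedicatedNodes = 1;
    $TargetLowPriorityNodes = 0;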

I haven’t found a way to get detailed debugging information out at this point, but I wondered if there was some problem with the VM image I was using. The previous night I’d tidied up various unused resources from my previous implementations, so I thought I might have deleted something important.

Next I tried creating a single VM manually from the same image that the pool was using. That worked fine, so then I created a new image from that VM and created a new pool using the new image. The new pool using that image could scale just fine, so I tried creating a third pool using the original image. That one had the same problem, so there was definitely something up with the original VM image I was using.

To get things working, I moved all the pending jobs to use the pool with the new image while I tried to figure out what was up with the original one.

To figure it out without breaking the live system again, I created another new VM called testvm from the same image, then captured an image of it as testvm-image. With the VM itself now deleted, there were still a bunch of resources left lying around, the same sort of thing I’d tried to tidy up the previous night:

VM Resources

Although the virtual machine itself had been deleted, there were still four resources left, apparently unused:

  • Public IP address
  • Network security group
  • Network interface
  • Disk

So in the best traditions of trial and error I created a new Azure Batch pool using the new image, then tried deleting each of these resources in turn and checking if the pool could still resize afterwards.

Deleting the public IP address resource first, because it was at the top of the list, gave an error because it was linked to the network interface resource. That makes sense, so I deleted the network interface instead. Everything still worked fine.

I then deleted first the public IP address, then the network security group. Both times, the pool still resized correctly, so surely it was the disk resource that would trigger the problem…

Failed Pool

Yes! Now that I’d deleted the disk resource from my imaged VM, I could no longer resize the pool.

Now, I’m not quite sure what is causing the problem here. The image itself is still usable for creating VMs individually, apparently just not through Azure Batch. Presumably all the data required to create a VM is wrapped up in the Image resource itself, but Azure Batch must have some additional dependency somewhere. I’d love to understand more, so if anyone can explain what’s going on here, please let me know!

Autotuning Diabetes Settings in the Cloud: Part 2

The AutotuneWeb system described in Part 1 is made up of 4 services in Microsoft Azure working together as illustrated below:

AutotuneWeb Services

The App Service runs the website that users access. It is written as an ASP.NET MVC application: the user enters their Nightscout URL and the application then does the following (a simplified sketch of the first two steps is shown after the list):

  1. Extracts the profile information, including the insulin:carb ratio, insulin sensitivity factor and basal rates
  2. Converts that profile information to the format expected by Autotune
  3. Shows the user the converted profile and the command line required to run Autotune against it
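
Here is a heavily simplified sketch of steps 1 and 2. The /api/v1/profile.json endpoint is the standard Nightscout profile API, but the output property names are trimmed right down for illustration; the real converter in the AutotuneWeb repository handles far more fields and edge cases.

    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Newtonsoft.Json.Linq;

    public static class ProfileConverter
    {
        // Fetches the current Nightscout profile and maps a few of its values into an
        // Autotune-style profile object. The output field names are illustrative only.
        public static async Task<JObject> ConvertAsync(string nightscoutUrl)
        {
            using (var http = new HttpClient())
            {
                var json = await http.GetStringAsync(nightscoutUrl.TrimEnd('/') + "/api/v1/profile.json");

                // The first document returned holds the current profile store; pick the default profile from it
                var doc = (JObject)JArray.Parse(json)[0];
                var store = (JObject)doc["store"];
                var profile = (JObject)store[(string)doc["defaultProfile"]];

                // Map the values Autotune cares about into its own format (simplified)
                return new JObject
                {
                    ["carb_ratio"] = profile["carbratio"][0]["value"],
                    ["sens"] = profile["sens"][0]["value"],
                    ["basalprofile"] = new JArray(((JArray)profile["basal"]).Select(b => new JObject
                    {
                        ["start"] = (string)b["time"] + ":00",
                        ["rate"] = b["value"]
                    }))
                };
            }
        }
    }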

At this point we’ve covered the initial requirements to simplify access to Autotune. The remaining step is to actually run Autotune automatically and email the results back. To handle this we need a few more services:

The SQL Database stores details of each Autotune job that is requested, including a unique identifier and the email address to send the results back to.

The Storage Account holds the Autotune-format profile information and the results produced by Autotune.

The Batch Account processes Autotune using the profile stored in the Storage Account and places the results back in the same Storage Account.
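
The job submission itself is a thin wrapper around the Microsoft.Azure.Batch SDK. A rough sketch, with the account details, pool ID and task ID all placeholders rather than the values AutotuneWeb actually uses:

    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Auth;

    public static class AutotuneJobSubmitter
    {
        // Submits a single Autotune run as an Azure Batch task. All names and IDs are placeholders.
        public static void Submit(string jobId, string commandLine)
        {
            var credentials = new BatchSharedKeyCredentials(
                "https://myaccount.westeurope.batch.azure.com", // Batch account URL (placeholder)
                "myaccount",                                    // Batch account name (placeholder)
                "<account key>");

            using (var batchClient = BatchClient.Open(credentials))
            {
                // Create a job targeting the existing pool of Autotune VMs
                var job = batchClient.JobOperations.CreateJob(jobId, new PoolInformation { PoolId = "autotune-pool" });
                job.Commit();

                // The task just runs the oref0-autotune command line the website worked out earlier
                batchClient.JobOperations.AddTask(jobId, new CloudTask("autotune", commandLine));
            }
        }
    }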

When Autotune has finished running, the App Service is notified by the Batch job. The MVC application then retrieves the Autotune results from the Storage Account and reformats them into a more easily readable format using the same Razor engine used by the MVC views. The email address to send the results to is retrieved from the SQL Database, and the email is finally sent using SendGrid.
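
The email step itself is only a few lines with the SendGrid client library; something like this, with the sender address and subject as placeholders:

    using System.Threading.Tasks;
    using SendGrid;
    using SendGrid.Helpers.Mail;

    public static class ResultMailer
    {
        // Sends the rendered Autotune results to the address stored against the job
        public static async Task SendResultsAsync(string apiKey, string toAddress, string htmlBody)
        {
            var client = new SendGridClient(apiKey);

            var message = MailHelper.CreateSingleEmail(
                new EmailAddress("noreply@example.com", "AutotuneWeb"), // placeholder sender
                new EmailAddress(toAddress),
                "Your Autotune results", // placeholder subject
                null,                    // plain text body
                htmlBody);               // HTML body rendered with Razor

            await client.SendEmailAsync(message);
        }
    }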

Because Autotune is run using Azure Batch, the service can scale quickly based on the number of jobs that users are requesting at any one time. An initial version, which used the single VM I had originally set up to run only my own Autotune commands, was quickly overwhelmed when more than 2 or 3 jobs were queued at once. Switching to Azure Batch eliminated this problem and also reduced cost, as I could use low priority VMs for 20% of the cost of a regular VM. The scaling is handled automatically by a formula on the Batch Account pool that increases the number of VMs when there are more jobs waiting to run and removes VMs, all the way down to zero, when there is nothing waiting.
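
I won’t claim this is exactly the formula in production, but the shape of it is along these lines, using the standard Azure Batch autoscale variables ($PendingTasks, $TargetLowPriorityNodes and so on):

    pending = avg($PendingTasks.GetSample(180 * TimeInterval_Second));
    $TargetLowPriorityNodes = min(20, pending);
    $TargetDedicatedNodes = 0;
    $NodeDeallocationOption = taskcompletion;

In other words: look at how many tasks have been waiting over the last few minutes, aim for roughly one low priority node per waiting task up to a cap of 20, and let the count fall back to zero when the queue is empty, without interrupting any tasks that are still running.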

Want to run it yourself?

If you have a Nightscout site that you want to run Autotune against, you’re welcome to use AutotuneWeb to do it for you right now.

All the code for AutotuneWeb is available on GitHub if you want to run everything yourself or just see more detail about how any of it works. PRs welcome!

What’s Next?

I used a SQL database originally because that’s the technology I’m most familiar with. However, it’s probably overkill for what is required by this application and could be more efficiently replaced by a Table in the Storage Account. This is my first “cloud-native” application, so please let me know if there’s anything more I could do to make it more “cloudy”!

Autotuning Diabetes Settings in the Cloud: Part 1

Three years ago our first son was diagnosed with Type 1 Diabetes (T1D). Over the next few months we quickly got used to the process of calculating the amount of insulin required for meals and high blood sugar, and adjusting those calculations on a trial-and-error basis. As a software developer I felt sure there should be a better way.

Thanks to following Scott Hanselman on Twitter I stumbled across some mentions of the OpenAPS project – an open-source artificial pancreas system. Simply amazing stuff! The basics of the system are:

  1. A CGM system that monitors your blood sugar and sends a reading every 5 minutes to…
  2. A processing system running on a Raspberry Pi, Intel Edison or similar that runs all the same calculations we were doing manually and sends the results to…
  3. An insulin pump that delivers (or stops delivering, depending on the results of the calculations) the required amount of insulin

This seemed great, and as soon as we could get our hands on a compatible insulin pump we got up and running with the AndroidAPS system, which uses the same core logic but simplifies the physical setup by using a standard Android phone as the “brain” and Bluetooth communication with the CGM and pump.

However, there’s no clever machine-learning-y stuff going on with those calculations: the system is still essentially performing the same calculations we always were, just automatically and reliably. While that’s definitely a step forward, if the ratios (the insulin:carb ratio that determines how much insulin should be required when eating, and the insulin sensitivity factor (ISF) that determines how much insulin should be required to bring down a high blood sugar level) and basal rates (the amount of insulin continuously delivered in the background) are incorrect, then the results aren’t going to be as good as we’d hope for.

The process of refining what those settings should be normally seems to be done simply as trial-and-error, which can get very frustrating! Luckily, one part of the OpenAPS project is Autotune, a system that can run on historical data covering blood sugar, carbs and insulin and recommend changes to those ratios and basal rates based on hard data. Definitely much more appealing to my analytical nature!

The problem is, as a Windows and AndroidAPS user, I found it pretty difficult to use. The details of the ratios and basal rates currently in use weren’t stored in the format that Autotune was expecting, and I had to go back to the documentation each time to work out the format it needed and the details of the command line I had to run. I also had to run it on a Linux VM, just making it that bit more difficult and meaning I only ran it infrequently rather than as a regular process.
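
For context, the command line in question is the standard oref0-autotune invocation, which looks something like this (the directory, URL and date here are placeholders, and there are further optional arguments I’m not showing):

    oref0-autotune --dir=~/myopenaps --ns-host=https://mynightscout.example.com --start-date=2019-01-01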

To try and make the process simpler for me, and hopefully for others in a similar situation, I started the AutotuneWeb project to:

  1. Automatically convert the profile information stored in Nightscout to the format required by Autotune
  2. Give me the command line details to run, and even,
  3. Run the command for me on my Linux VM and email me the results

I’ve been through a few iterations on how to get this to work, so in Part 2 I’ll run through the various parts that make up this system and how I’ve currently got it running in Microsoft Azure for minimum cost.