Dataverse: Power Automate vs Plug-ins

If you have clicked on this article, it is probably not only because of the catchy title, but also because you have asked yourself this very question, or because it has come up in a discussion with your colleagues. And if none of these apply and you have never run into the issue, you are still in the right place to save yourself some time later!

As you can imagine from this introduction, this is not a new topic in the community, and you can find other articles on the subject. My desire to write about it also comes from Alex Shlega’s article: Dataverse dilemma: should it be a flow or should it be a plugin?. My goal is simply to share some thoughts, guidelines and remarks to set you on the right path.

As a reminder, we are in a Dataverse/Power Apps context, so some information may not apply to your own context or may have changed in the meantime. Furthermore, I am expressing my opinion based on my experience and my impressions.

Capabilities Overview

Whatever the subject, when you try to compare two things, whether technical features or anything else in life, you need to step back and look at their capabilities, their advantages and their disadvantages, and only then compare them to draw a useful conclusion.

Power Automate

I’m not going to list every capability and every integration that Cloud Flows offer, but rather highlight some points that I think are worth considering in this comparison.

Performance Profile

One of the first things to understand is that, depending on the license applied to a flow, its performance will not be the same! Yes, you read that right! Even if it is quite logical not to get the same performance with a Free plan and a Per-Flow plan, this notion of performance profile is important to consider in the case of high-frequency triggers, a large number of actions, large payloads, or simply the maximum number of items to iterate over in a loop.
In the case of “Apply to each”, the performance profile can have a real impact: with a Low profile you can process only 5,000 items, against 100,000 for the other profiles.

What is quite interesting is that, through these performance profiles, Microsoft makes it clear that capabilities are tied to licensing, and that it may therefore be necessary to switch to a standalone Power Automate plan to get all the capabilities of the service. So don’t hesitate to keep an eye on this 😉

For your information, you can find out at any time which plan is assigned to you by navigating to the Flows tab of the maker portal and pressing ALT+CTRL+A. Then you just need to find the license SKU ("licenseSku": "P2") that has the property "isCurrent": true.

Characteristics and specific points to keep in mind

Throughout my research and experiments, I have kept in mind several points and limitations that I consider essential to know when discussing or implementing Cloud Flows.

  • Trigger(s) and Connector(s): One of the great strengths of Power Automate is the diversity of connectors at our disposal, which allow us to interact with various systems at a lower cost! Think, for example, of implementing mail/SMS notifications in a Dataverse context without having to set up the OOB Exchange integration or a real integration with a custom SMS provider (building a mail/SMS system with plug-ins would require a lot of work, including handling security, etc.). The other interesting aspect in the Dataverse context is the variety of triggers that can be used, such as recurrence, the Power Apps trigger (especially with Custom Pages, which reconciles us with the implementation of pop-ups) and now business events.
  • No. of items: The limits on the number of actions (500 maximum), variables (250 maximum) and nesting depth (8 levels) in a Cloud Flow seem quite reasonable (if you start to reach these limits, you should question your technical design or try to split your logic!).
  • No. of items executed over time: Even if the number of actions a Cloud Flow can contain is quite generous, there are also limits on the number of actions that can be executed (whichever they are; we are talking about all steps other than the trigger) over rolling windows of 5 minutes and 24 hours. These limits apply to the Cloud Flow itself and are not tied to a user (additional limits apply depending on the connector used, and those do depend on the user). For the 24-hour window, the performance profile directly drives this limit: 10,000 for Low, 100,000 for Medium and 500,000 for High.
  • Runtime: A run can last up to 30 days (for asynchronous requests; a synchronous request must complete within 120 seconds), which is interesting, especially when implementing a waiting logic based on a given date or when using waiting actions such as approvals, and the recurrence interval can range from 60 seconds to 500 days (very useful when you must implement scheduled logic).
  • Concurrency: Another exciting point is the ability to manage concurrency at several levels. The first is the trigger, where we can fully control the number of parallel runs, which is particularly useful when you have a recurring logic whose processing time could be longer than the recurrence itself (you can enable this setting and configure the degree of parallelism from 1 to 50). The second concerns the actions themselves, and in particular “Apply to each”, which can also run its iterations in parallel, up to 50 at a time. We can therefore imagine a scenario, for example with approvals or adaptive cards expecting a response, where we send 50 cards at the same time and wait for the answer to each one (be careful, because the flow will remain blocked as long as at least one response is pending). Finally, it is possible to use parallel branches to multiply these treatments, which can themselves be parallelized!
  • Failures: There is a policy of turning off a Cloud Flow if too many failures occur over a 14-day period; you will not receive an alert for the failures themselves, but stopping the flow is an obvious way to avoid causing further problems and having to fix corrupted data. If a Cloud Flow starts to hit a throttling limit, it will be slowed down until it falls back under the threshold defined by its performance profile, an email will be sent to the owner informing them that there is an issue, and the flow will also be deactivated after 14 days. This policy, and the fact that failures are only visible on the flow itself, should encourage us to monitor flows (e.g. using the CoE Starter Kit) or to fine-tune error handling by implementing specific patterns via the Scope control: Error handling model in Power Automate. You can also check this blog post: Get notifications when flows fail using PowerShell.
  • Retry Process: The last point I keep in mind is the retry policy available on actions that support it (the HTTP trigger, for example), which lets us define the number of retries as well as the minimum and maximum retry interval: Handle HTTP request failures in Power Automate

If you want a complete list of the limitations and capabilities of the service itself (not of every connector), you can check the official documentation: Limits for automated, scheduled, and instant flows

Plug-ins

If I started with the low-code side and Power Automate, I must admit it is because it was more interesting for me to dig into that subject than into the historical Dataverse development model I have been dealing with for years. In the same way, I want to highlight certain points and characteristics of this type of development.

  • Runtime: All back-end developments must complete within 2 minutes.
  • Execution Pipeline: You can choose at which stage the code executes, which makes it possible to act before the database transaction, to manage the execution order, and to get granularity at the function/code level.
  • Execution Mode: Logic can run either synchronously or asynchronously, allowing the user to be warned of an error or simply to see a result in real time.

Be vigilant on these last two aspects: a synchronous development that triggers another piece of synchronous logic becomes part of the same transaction, so if one fails the whole transaction fails (which can be a strategy in itself).
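
To make these notions concrete, below is a minimal sketch of a plug-in entry point (the class name and the custom column are illustrative assumptions). Note that the message, table, stage and mode are chosen when the step is registered, typically with the Plug-in Registration Tool, not in the code itself:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Minimal plug-in skeleton. The message (e.g. Update), the table, the stage
// (PreValidation / PreOperation / PostOperation) and the mode (sync / async)
// are configured on the registered step, not in the code.
public class AccountUpdatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // The Target only contains the attributes sent in the request.
        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity target)
        {
            // Illustrative logic (assuming a PreOperation registration): stamp a
            // hypothetical custom column so the change is saved in the same transaction.
            target["new_lastprocessedon"] = DateTime.UtcNow;
        }
    }
}
```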

  • Error handling: Implementation of try/catch/finally blocks to handle errors, standard tracing using the Dataverse TracingService, and the ability to integrate natively with Application Insights to get a centralized view of errors (see the first sketch after this list). On the other hand, it can be very tedious to manage a retry process to re-run a piece of logic in the same context.
  • Code Architecture: Use of shared projects and of a PluginBase class to speed up development but also to capitalize on common, reusable functions. Organizing functions/methods per event increases the readability of the code (for example, you can search for a technical field name and find all the associated business logic).
  • External Communication: Calling external web services requires the implementation of specific classes (HttpClient, WebRequest…) and authentication management (token handling, MSAL…).
  • Image(s): One of the main advantages of this type of development is the possibility of using Pre and Post Images, which give us a “state” (= image) of the record before and after the operation, allowing us to run checks but also to avoid retrieving the record again just because an attribute is not present in the context (see the second sketch after this list).
  • Trigger(s): As you already know, Dataverse implements an event framework that allows us to detect when an event occurs on the server (so also when the Web API is used), based on OOB messages or on custom messages implemented through Custom APIs. We can refine the registration by selecting a specific table, mode and stage, but also specific fields for an update. Outside of these cases, we do not have a trigger to capture an external event.
  • Deployment & Versioning: Plug-ins are solution aware, so deploying them is not a problem unless some values must be contextualized per environment, which implies using environment variables or settings at the model-driven app level. Versioning is quite simple when you work with source control and a branching strategy: you can easily compare the same file across two commits.
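
As a first sketch, here is what the error-handling point above can look like in practice, with the tracing service and a try/catch surfacing the error to the platform (the class name and message text are illustrative):

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class ContactCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        try
        {
            // Traces end up in the Plug-in Trace Log (and in Application Insights
            // when the environment is connected to it).
            tracing.Trace("Message: {0}, Depth: {1}", context.MessageName, context.Depth);

            // ... business logic goes here ...
        }
        catch (Exception ex)
        {
            tracing.Trace("Error: {0}", ex.ToString());

            // In synchronous mode, throwing this exception cancels the transaction
            // and displays the message to the user.
            throw new InvalidPluginExecutionException("An error occurred in ContactCreatePlugin.", ex);
        }
    }
}
```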
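
And as a second sketch, here is how a Pre Image can be used on an Update step to compare the previous and new values without an extra Retrieve. The image name “PreImage” and the column being watched are assumptions; the image must be registered on the step with the required columns:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class AccountAddressPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // Only react if the watched column is part of the update and the
        // registered image (named "PreImage" here, by convention) is available.
        if (target.Contains("address1_city") && context.PreEntityImages.Contains("PreImage"))
        {
            Entity preImage = context.PreEntityImages["PreImage"];

            string oldCity = preImage.GetAttributeValue<string>("address1_city");
            string newCity = target.GetAttributeValue<string>("address1_city");

            if (!string.Equals(oldCity, newCity))
            {
                // ... react to the address change (e.g. update related contacts) ...
            }
        }
    }
}
```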

My Views

Let’s now move on to one of the best parts of this article, where I will try to highlight some problems and scenarios, often observed in real projects, for which it is necessary to reflect on this choice or to understand what it implies, and also to put forward some recommendations.

The first point that comes to mind is, of course, maintainability. Any consultant who has worked on recent projects has probably started to introduce Cloud Flows, or has even delivered a project using only this kind of component. Moreover, the democratization of Cloud Flows goes hand in hand with the growing popularity of low-code/no-code. As a result, you can quickly end up with a mountain of Cloud Flows; for example, I once saw a project with about 200 Cloud Flows using the Dataverse trigger and performing only Dataverse actions. You can imagine how extremely complicated it becomes in this kind of situation to understand what each flow actually does. Added to this is the fact that the larger the logic, the less readable the Cloud Flows become (think of having to scroll to read the actions or the nested ForEach sequences, etc.). This is particularly problematic when we try to rationalize the triggers, for example when X Cloud Flows are triggered on the update of the same field: it then becomes complicated to know which one will execute before the others (which could impact them) and to understand the different logic implemented/triggered. Understanding the different components implemented is crucial on a project, because if you start to have a mix of components firing on the same trigger (Cloud Flows, workflows, plug-ins…), it will become complicated to investigate.

To avoid this kind of problem, I strongly recommend that you establish a naming convention from the start (you can still do it afterwards, but it will be tedious…), for example “TABLE TRIGGER – CRUD OPERATION – FUNCTIONAL ACTIONS”, which would give: “ACCOUNT – UPDATE – UPDATES ADDRESSES OF RELATED CONTACTS”; you can of course be more granular according to your needs. Proper documentation of the implemented logic, with references to the technical field names, is also a key element to easily find the logic attached to a specific field. (There is an existing tool if you want to generate a Visio file: Flow To Visio – XrmToolBox Addon). If you are extending an existing project, put aside your preferences and your opinion to avoid mixing components when it is not necessary. If there are only Cloud Flows, avoid adding a “layer” of plug-ins unless your problem genuinely requires it, of course 🙂 .

The second point is simply the capabilities of these two components, which are complementary rather than opposed. It is undeniable that the connectors Power Automate provides are a real accelerator for communicating with other systems compared to the same implementation in C#, and even more so now that environment variables offer a native integration with Azure Key Vault (feel free to check this blog post: Azure Key Vault Secrets in Dataverse). However, you should not fall into the trap of implementing Cloud Flows to meet integration requirements that demand scalability/performance (in that case, Logic Apps are still preferred). Another example, unfortunately often encountered, is overloading the native Dataverse SharePoint integration with Cloud Flows that generate folders as soon as a record is created: this forces the user to wait for an indefinite time without getting any feedback from the Cloud Flow itself. It’s a good thing to be able to resubmit a Cloud Flow run, which is not possible for a plug-in (at least not out of the box), but I must admit that the use of Pre/Post Images often pushes me to stay on plug-ins (let’s not forget that, in a Cloud Flow with an update trigger, we must perform a Get Record if we want to retrieve other information from the same record, which does not guarantee that the information has not been altered in the meantime!). I couldn’t mention capabilities without talking about synchronous mode which, as you know, is only possible via plug-ins. Even so, some options remain, like using Power Fx coupled with a Cloud Flow on the Power Apps trigger, triggered from a command bar button or from a button in an embedded custom page/canvas app, but this does not cover all cases.

Another aspect is reusability, which particularly applies to projects of a certain size, where we can capitalize on certain logic/patterns because they will be reused elsewhere. On the pro-dev side, we set up shared projects or common classes, which translate into Child Flows in the Power Automate universe, but you will run into readability and maintainability issues at some point because it will generate a multitude of Cloud Flows. There are also intermediate scenarios where we can set up Custom APIs with the objective of making complex logic (either impossible via Cloud Flow or requiring a certain robustness) available to makers, as in the sketch below.
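
As an illustration of that last idea, here is a minimal sketch of a plug-in backing a hypothetical unbound Custom API (the unique name “new_CalculateScore”, the “AccountId” input and the “Score” output are assumptions); once registered, a maker can call it from a Cloud Flow with the Dataverse “Perform an unbound action” action:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Plug-in registered on the main operation of a hypothetical Custom API
// "new_CalculateScore", with an input parameter "AccountId" (EntityReference)
// and an output parameter "Score" (Integer).
public class CalculateScorePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        var accountRef = (EntityReference)context.InputParameters["AccountId"];
        Entity account = service.Retrieve("account", accountRef.Id,
            new Microsoft.Xrm.Sdk.Query.ColumnSet("numberofemployees"));

        // Illustrative "complex" logic, kept trivial on purpose.
        int score = account.GetAttributeValue<int?>("numberofemployees") > 100 ? 10 : 1;

        // Output parameters are returned to the caller (flow, Web API, SDK...).
        context.OutputParameters["Score"] = score;
    }
}
```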

Finally, the last point is obviously skills: the pro-dev option will indeed require specific skills, while Cloud Flows keep us in the low-code world. Nevertheless, let’s not forget that many good practices exist to ensure not only the efficiency but also the robustness and consistency of the latter (e.g. Using Filtering Conditions).

As you may have realized, there is no miracle solution: the goal is to determine the best option for a given context, based on a certain number of variables, by taking a step back rather than just following your personal preference 😉
