Categories: salesforce

Salesforce Flow Tests are not useful enough

Salesforce made Flow Tests available first as a beta in Summer '22 and generally available in the Winter '23 release. However, ever since that launch, it looks like Salesforce hasn't done enough to make them truly useful. While Salesforce created documentation on the considerations and limitations for testing with flows¹, this doc does not go far enough in telling you the real problems you will hit.

One important general concept with testing is the ability to integrate it into a CI/CD process. That way you can repeatedly run those tests before/after each deployment, across multiple environments, scratch orgs, etc. None of that is possible (yet?): flow tests can only be triggered through the Flow Builder UI, which defeats one of the major reasons to use them. (Vote on IdeaExchange if you think it'd be useful!)

It is also impossible to measure the test coverage percentage of your flow with the flow tests you have, or even to enforce a minimum percentage of test coverage in your org. If you wanted flow tests to help enforce good testing guidelines among your fellow admin and dev colleagues, forget about it.

It seems that flow tests were designed with the idea that they would help you avoid having to constantly run Debug. We have all been there: we make a small change to a flow, select a record, run the debug again, it fails, try again, etc. That's the use case where flow tests are helpful, replacing a debug you keep having to re-configure constantly. This works well when you have an existing record to use in that environment.

But it gets quite bad once you start wanting to bring your flow tests into your other environments and your scratch orgs: hardcoding record IDs into a flow test that goes into source control isn't going to work. Instead, flow tests offer the ability to create an initial record (Set Initial Triggering Record) or the subsequent updated record (Set Updated Triggering Record), which is somewhat of a replacement for creating dummy records when writing your Apex unit tests. You can read the flow test docs to see how it works.

And that’s where all your real problems are starting. You can only create the triggering records from the object that originally triggered your record-triggered flow. But what if your object is tied with multiple lookups to other objects? And your flow needs those lookups to be filled? The UI doesn’t let you create those lookup records, you can only hardcode the record IDs, which is a no-go for deploying and testing across environments. So you can only use those flow test across multiple environments if your flow and your tests are really basic.

But where it does not work well at all is when it comes to Person Accounts. While trying this in an environment with Person Accounts, with a flow triggered by updates on the Account object, we encountered all kinds of weird bugs such as:

- The Account.PersonLeadSource input contains a field's value that's incompatible with the String data type.

PersonLeadSource is a field that was not even part of our flow test's triggering records and that is not used at all in the environment.

To try to debug this, you can pull the flow and the flow test metadata into your local project or package. Flow tests have their own XML metadata type and end up within /force-app/main/default/flowtests/. There you can remove or update the fields that are causing problems within the InputTriggeringRecordInitial and InputTriggeringRecordUpdated sections and manually push those flow tests back into the org. But investigating those files, I realized why they never thought of Person Accounts when creating this feature: the RecordTypeId for the Person Account is hardcoded into the file. While that works in the sandboxes of a single org (since those IDs persist across its environments), it will never work in a package installed across various scratch orgs or distinct orgs. You are better off forgetting about Flow Tests for Person Accounts as well.
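For reference, here is a minimal sketch of that round trip using the sfdx CLI (the flow and flow test names below are placeholders for your own):

# Retrieve the flow and its flow test into your local project
sfdx force:source:retrieve -m "Flow:My_Account_Flow, FlowTest:My_Account_Flow_Test"

# After editing the XML under force-app/main/default/flowtests/,
# deploy the modified flow test back to the org
sfdx force:source:deploy -m "FlowTest:My_Account_Flow_Test"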

I initially thought that Salesforce was finally adding some professionalism to flows: the ability to have tests that cover the major use cases of your flow and ensure that a new deployment into your environment, or new updates to your flow, does not accidentally create a new bug. Somewhat of a replacement for Apex tests.

But it’s not that at all, it’s merely an improvement to the debug feature. And it’s fine if that is what you need, but we are still far from being able to fully replace Apex Classes and Apex Triggers on business critical features.

  1. The Considerations for Testing Flows documentation is a bit confusing because it mentions that flow tests don't support flow paths that run asynchronously. However, the Winter '23 release notes mention that flow tests now support scheduled paths. I understand these are 2 different things, but in that context people might assume flow tests won't work on a scheduled path. When you create a flow test, the UI also confusingly says "A test can take only the flow path that runs immediately", yet it does let you select a flow path that runs on a schedule. ↩︎
Categories: salesforce, sfdx

Testing Salesforce Managed Packages (2GP) using Bitbucket Pipelines & SFDX scratch orgs

This post assumes you have access to a Salesforce Dev Hub

I’ve been working on a Second-Generation Managed Package (2GP), and one frequent issue is that the package needs to support both Person Accounts and the standard business accounts and contacts model. On top of that, the package also needs to support both a Multiple Currencies environment and a standard one. That means that the same Apex code might work in one environment but not the other.

One great benefit of SFDX is that you can easily spin up a new scratch org that supports Person Accounts, deploy your code, run the tests, and ensure no errors are thrown. However, it is easy to work on your code for a few hours (or days!) in your standard business accounts and contacts environment before you notice that your package is now broken in Person Accounts environments. That’s where a good Continuous Integration (CI) setup is a good way to remedy those problems. You might want to spend some time on Trailhead with Continuous Integration Using Salesforce DX if you are unfamiliar with the concept.

I am using Bitbucket for source control, so I’ll break down the necessary steps to set up Bitbucket Pipelines with Salesforce DX scratch orgs using the JWT authorization flow.

  1. Create a Private Key and Self-Signed Digital Certificate (see the sketch after this list). You will end up with a server.crt and a server.key file.
  2. Create a Connected App within your Salesforce Dev Hub (the same one you use to create scratch orgs for your development). You will assign your new server.crt digital certificate file to this connected app. Salesforce will provide you with a new Consumer Key.
  3. From your own environment, authenticate to your Dev Hub using the JWT-based flow with your Consumer Key, your server.key file, and the login username you use with your Dev Hub. Important: log out first (sfdx force:auth:logout -u DevHub) before you try to connect again to your Dev Hub with the JWT flow. The JWT authentication command should look like this:
sfdx force:auth:jwt:grant --clientid CONSUMERKEY --jwtkeyfile PATH/TO/server.key --username DEVHUBUSERNAMEEMAIL --setdefaultdevhubusername --setalias DevHub
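
For step 1, a minimal sketch with openssl looks like this (key size, validity, and filenames are just common defaults; adjust as needed):

# Generate an RSA private key
openssl genrsa -out server.key 2048

# Create a certificate signing request (openssl will prompt for the subject fields),
# then self-sign it to produce server.crt
openssl req -new -key server.key -out server.csr
openssl x509 -req -sha256 -days 365 -in server.csr -signkey server.key -out server.crt

Once the JWT grant succeeds, sfdx force:org:list should show your Dev Hub connection.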

Now that you have confirmed you can connect to your Dev Hub using the Connected App, it’s time to set up your Bitbucket pipeline. Very conveniently, Salesforce provides a pre-made test Salesforce package you can use on Bitbucket to try out Bitbucket Pipelines before you configure them within your own existing package.

https://github.com/forcedotcom/sfdx-bitbucket-package
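
If you want to experiment with the sample first, cloning it is the easiest starting point:

git clone https://github.com/forcedotcom/sfdx-bitbucket-package.git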

  1. Because you don’t want to store your server.key, which allows direct access to your Dev Hub, within source control (you could if you wanted to, but it is safer to avoid it), you instead want to encrypt it and provide the keys to decrypt it at runtime using Bitbucket Repository variables. First, generate a key and initialization vector (iv), which will be needed for encrypting your key (and decrypting it later):

openssl enc -aes-256-cbc -k <passphrase here> -P -md sha1 -nosalt

That command will provide you with a key and an iv value (keep note of them in a safe place).
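
The output will look something like this (dummy values; yours will differ):

key=61DFA51581A43C5DFA7F07E3B3E26838BA046F971D57274711BB6A6F31AE1F05
iv =A31C29BDA43B1FBF7A4EAFF26B66C6D2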

  2. Encrypt your server.key file using the key and iv values:
openssl enc -nosalt -aes-256-cbc -in server.key -out server.key.enc -base64 -K <key from above> -iv <iv from above>

You now have a server.key.enc file, which is what you will commit and store in your repository (remember not to store the server.key file in source control).
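
For reference, the pipeline’s Decrypt server key step will perform the inverse operation at build time, something like this (paths are placeholders; the repository variables are defined in the next step):

openssl enc -nosalt -aes-256-cbc -d -in server.key.enc -out server.key -base64 -K $DECRYPTION_KEY -iv $DECRYPTION_IV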

  3. In your Bitbucket repository, go to Settings and then Repository variables. Add these 4 variables:
    1. DECRYPTION_KEY
      • Should be the key value you generated
    2. DECRYPTION_IV
      • Should be the iv value you generated
    3. HUB_CONSUMER_KEY
      • The consumer key from your Salesforce Connected App on your Dev Hub
    4. HUB_USER_NAME
      • The username you use to connect to the Dev Hub
  4. Copy the bitbucket-pipelines.yml file to the root directory of your Salesforce managed package project and customize it. This is the configuration file for your Bitbucket Pipeline, where all the scratch orgs are created, tests are run, package versions are created, etc. (the sketch after this list gives an idea of the core commands involved).
    1. Update the value of PACKAGENAME to the hardcoded ID of your package, which you can find within your sfdx-project.json or by typing sfdx force:package:list (this is the value that starts with 0Ho).
    2. Update the Decrypt server key step to point to the location of your server.key.enc file in your repository (and make sure the --out argument, where the decrypted key will be stored, matches the input used by the Authorize Dev Hub step).
  5. The last step in the bitbucket-pipelines.yml file (Run unit tests on scratch org) runs all the tests located on the scratch org where the package was installed. However, because I’m working on a managed package in a namespaced environment, I had to change this line to explicitly define the tests I want to run using the --tests argument, otherwise the tests will not run.
  6. The bitbucket-pipelines.yml file is configured to execute every time you commit anything on any branch; of course, you might want to change that. Read the doc on how to Configure your pipeline (I’ve changed mine to only execute on a nightly schedule).
  7. After you commit all your files (bitbucket-pipelines.yml and server.key.enc), go to your repository, click Settings, and within Pipeline Settings click Enable Pipelines.
  8. Your next commit should execute the pipeline! To confirm the execution completed successfully, go to the Pipelines section of your repository.
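
To give an idea of what the pipeline does under the hood, here is a minimal sketch of its core commands (aliases, paths, and test names are illustrative; the actual steps live in bitbucket-pipelines.yml):

# Decrypt the key (see above), then authorize the Dev Hub with it
sfdx force:auth:jwt:grant --clientid $HUB_CONSUMER_KEY --jwtkeyfile server.key --username $HUB_USER_NAME --setdefaultdevhubusername -a HubOrg

# Create a disposable scratch org
sfdx force:org:create -v HubOrg -s -f config/project-scratch-def.json -a ciorg -d 1

# Push the source and run the tests
sfdx force:source:push -u ciorg
sfdx force:apex:test:run -u ciorg --wait 10 --resultformat human

# In a namespaced environment, list the tests explicitly instead:
# sfdx force:apex:test:run -u ciorg --tests MyTest1,MyTest2 --wait 10 --resultformat human

# Clean up
sfdx force:org:delete -u ciorg -p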

If you encounter an issue with the pipeline running in your project and can’t figure out the problem, I’d recommend creating a new repository using the SFDX Bitbucket package, creating a sample unlocked package, and practicing on that one first.

Categories: salesforce, sublimetext

Sublime Text 2 integration with Salesforce

Ever since I started working on Salesforce development, using the official Salesforce Eclipse-based IDE on Mac has been very painful. Everything is extremely slow, and it’s just not an enjoyable environment to use. Editing directly in the Sandbox’s text editor was my quick-and-dirty way of changing code without going through the process of launching this monster of an IDE.

I was looking for some kind of plugin to do Salesforce development in Sublime Text 2. What I found was much better than what I expected.

MavensMate IDE for Force.com on GitHub (OS X only)

Using the instructions available here, you run a few commands in the terminal to install the add-on. It adds a new menu to Sublime Text dedicated to your Salesforce Apex development.

After configuring your Salesforce username/password (with your security token appended to the password), you decide which objects’ metadata to load locally and you are good to go. You have access to your classes, tests, triggers, Visualforce pages, etc.

You can also run your tests, verify code coverage of your tests, and you get auto-completion on your code using the metadata loaded previously.

Just the same as if you were using the Salesforce IDE, but much better integrated into your Sublime development environment!