
Best Practices for scripting

Executing scripts can be a time- and resource-intensive job. It does not always have to be that way.

Use serverDocument as often as possible

In scripts, you have access to the clientDocument and serverDocument variables. As the name indicates, the clientDocument variable contains the document as it is on the client, including all unsaved changes. However, it does not contain information that is not accessible to the user, such as:

  * Fields to which the user does not have view permission
  * Access groups

This means that by saving the clientDocument to the database using the persistent.saveDocument method, you may lose information.

Therefore, it is a best practice to make changes to the serverDocument and save it to the database using persistent.saveDocument.

If you want to incorporate the changes that the user made on the client, you could simply add a Save Document rule to the user action. This ensures that serverDocument contains the latest changes.
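As a minimal sketch of this pattern: the field names are illustrative, and the small stub of persistent only stands in for the platform API so the example runs outside BizzStream.

```javascript
// Stand-in for the platform's persistent API so this sketch runs outside
// BizzStream; in a real script, persistent and serverDocument are provided
// by the scripting environment.
var saved = [];
var persistent = {
    saveDocument: function (doc) { saved.push(doc); }
};
var serverDocument = { _id: 'doc1', status: 'draft' };

// Make the change on the server-side copy, then persist it. Because we save
// serverDocument rather than clientDocument, fields hidden from the user are
// not lost.
serverDocument.status = 'approved';
persistent.saveDocument(serverDocument);
```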

Use function library

Often, scripts in one environment use the same methods over and over again. Rather than copying methods from one script to another, we could create a function library that contains these frequently used functions.

Creating a function library
This script contains functions that are used in multiple scripts. For instance, we could define a couple functions for mathematical operations:

function addNumbers(a, b){
    return a + b;
}

function subtractNumbers(a, b){
    return a - b;
}

function multiplyNumbers(a, b){
    return a * b;
}

Include the function library
We can now include this function library in an existing script and use it:


// The variable addedNumbers will now equal 30.
var addedNumbers = addNumbers(10, 20);

If the need arises to change one of these methods, we only have to change the function in the library. All scripts that import the function library will instantaneously use the new function.
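To try the pattern outside BizzStream, here is a standalone copy of the three functions; in practice they would live in the included function library rather than in the calling script.

```javascript
// Standalone copies of the library functions so they can be run anywhere;
// in a real environment they would be defined in the function library and
// made available to scripts that include it.
function addNumbers(a, b) { return a + b; }
function subtractNumbers(a, b) { return a - b; }
function multiplyNumbers(a, b) { return a * b; }

console.log(addNumbers(10, 20));     // 30
console.log(subtractNumbers(20, 5)); // 15
console.log(multiplyNumbers(3, 4));  // 12
```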

Limit the number of database interactions

Database interactions from scripts are relatively slow operations. It is therefore a best practice to limit the number of database interactions in scripts.

A good way to reduce the number of database interactions is to fetch all required data in one go. Let's illustrate this with an example.

The inefficient way
Imagine that we have a script that calculates the VAT of the lines on an invoice. Each invoice line contains a reference field to a trade item document. The VAT percentage field in trade item documents contains the VAT percentage that should be applied. We could script this as:

_.each(serverDocument.lines, function(line){
    // Retrieve the trade item
    var tradeItem = persistent.getDocument('tradeItem', line.tradeItem);

    // Calculate the VAT
    line.vatPercentage = tradeItem.vatPercentage;
    line.vat = line.vatPercentage * line.priceExVAT / 100;
});

For each line in the document, the script retrieves the trade item from the database using persistent.getDocument. Even if multiple lines refer to the same trade item, the trade item is retrieved for each line separately.

The efficient way
We could also retrieve all trade items in one go:

// We retrieve the internal BizzStream ids of all trade items for which we need information.
var tradeItemIds = [];
_.each(serverDocument.lines, function(line){
    tradeItemIds.push(line.tradeItem);
});

// We retrieve all trade items from the database in one go using persistent.findDocuments.
// The persistent.findDocuments method returns an array with trade items. We use the
// MongoDB $in operator to find documents whose IDs are in the tradeItemIds array.
var tradeItems = persistent.findDocuments('tradeItem', {_id: {$in: tradeItemIds}});

// Calculate the VAT for each line
_.each(serverDocument.lines, function(line){
    // Per line, we find the trade item in the array using the _.find method. This is
    // much quicker than persistent.getDocument because the _.find method uses the
    // array of trade items that is already in memory.
    var tradeItem = _.find(tradeItems, function(tradeItem){
        return tradeItem._id === line.tradeItem;
    });

    // Once we know the trade item, we can calculate the VAT just like in the previous example.
    line.vatPercentage = tradeItem.vatPercentage;
    line.vat = line.vatPercentage * line.priceExVAT / 100;
});

If a document contains 100 lines, this approach performs one database call instead of 100, which makes the script substantially faster.
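The same pattern can be tried outside BizzStream by using plain data in place of persistent.findDocuments. As a further refinement, building an id-keyed lookup object avoids scanning the trade item array once per line; the data below is illustrative.

```javascript
// Illustrative data standing in for the result of persistent.findDocuments
// and for serverDocument.lines, so the sketch runs anywhere.
var tradeItems = [
    { _id: 'a', vatPercentage: 21 },
    { _id: 'b', vatPercentage: 9 }
];
var lines = [
    { tradeItem: 'a', priceExVAT: 100 },
    { tradeItem: 'b', priceExVAT: 200 }
];

// Build a lookup object keyed by _id so each line's trade item is found in
// constant time instead of scanning the whole array per line.
var tradeItemsById = {};
tradeItems.forEach(function (item) {
    tradeItemsById[item._id] = item;
});

lines.forEach(function (line) {
    var tradeItem = tradeItemsById[line.tradeItem];
    line.vatPercentage = tradeItem.vatPercentage;
    line.vat = line.vatPercentage * line.priceExVAT / 100;
});

console.log(lines[0].vat); // 21
console.log(lines[1].vat); // 18
```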

Preload data

For performance reasons, a script can be provided with preloaded data. Preloaded data can be seen as a set of document collections that can be used in scripts. Passing preloaded data to a script eliminates the need for (relatively) slow persistent calls (for instance persistent.getDocument) because the data is already available.

The preloaded data can be accessed in a script through actionInfo.preloadData.

The preloaded data is a JSON object that contains, per document definition, a property holding a collection of documents:

{
    order: [
        { name: 'pizza salami', price: 5 },
        { name: 'pizza hawaï', price: 20 }
    ],
    menuItem: [
        { name: 'dough', quantity: 2 },
        { name: 'flower', quantity: 1 }
    ]
}

In a script, we can use this object to conveniently fetch (for instance) all preloaded menuItems:

var menuItems = actionInfo.preloadData['menuItem'];
_.each(menuItems, function(menuItem){
    console.log('Hello menuItem', menuItem.name);
});
// Hello menuItem dough
// Hello menuItem flower

Please note that all documents that belong to the same document definition contain a union of all the fields selected in the tree view (see the configuration section).

You can add preloaded data to a script by following these steps:

  1. Go to Settings.
  2. Click on a document definition.
  3. Go to Workflow.
  4. Click the action that executes a script.
  5. Click on the Execute Script rule.
  6. Click the Preload Data button.
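Once configured, a script reads the preloaded documents directly from memory instead of calling persistent.getDocument. The sketch below uses a mock actionInfo so it runs outside BizzStream; in a real script, actionInfo.preloadData is filled by the platform.

```javascript
// Mock actionInfo standing in for the object the platform passes to the
// script; in BizzStream, preloadData is populated by the Preload Data
// configuration on the Execute Script rule.
var actionInfo = {
    preloadData: {
        menuItem: [
            { name: 'dough', quantity: 2 },
            { name: 'flower', quantity: 1 }
        ]
    }
};

// No database call is needed: the documents are already in memory.
var menuItems = actionInfo.preloadData['menuItem'];
var totalQuantity = menuItems.reduce(function (sum, item) {
    return sum + item.quantity;
}, 0);
console.log(totalQuantity); // 3
```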

Use lazy methods

For a few scripting methods, a 'lazy' version is available. The difference with the regular method is that execution is asynchronous: when using the lazy method, the script does not wait for a response. For large datasets, the reduction in runtime can be significant.

These methods are: