Process flows are built by dragging and dropping shapes onto a canvas, and then configuring those shapes to work in the way you need to exchange data between connector instances.
Process flows are extremely flexible. You can build something very simple to sync data between two instances with standard field mappings - or build more complex flows, perhaps using custom scripts and/or routing data to different paths based on given conditions.
Before you start building a process flow, make sure that you've installed a connector and added your required instances for any third-party applications that you want to use.
In this section, we share some insights on best practice for different aspects of building process flows.
When building and testing your process flows before going live, a common requirement is to build process flows that connect to development/staging instances of your third-party systems - this ensures that you're not working with live data during the testing phase.
This page details our suggested procedure for managing this approach in terms of:
Our suggested practice for adding process flows when you want to work with different environments is detailed in the following sections:
Step 1 Install or build required connectors for third-party systems to be integrated in your process flows.
Step 2 Add any required instances of these connectors and specify your DEV/STAGE credentials (i.e. credentials that allow you to access data in your DEV/STAGE environment for the third-party system). It's a good idea to indicate DEV/STAGE as part of the instance name. For example:
Step 3 Add any required instances of these connectors again, this time using your PRODUCTION credentials (i.e. credentials that allow you to access data in your PRODUCTION environment for the third-party system). It's a good idea to indicate PRODUCTION as part of the instance name. For example:
Step 1 Create a new process flow with the required name and description. You may wish to indicate DEV/STAGE as part of the process flow name but this isn't essential if you follow subsequent steps to apply labels.
Step 2 From the process flow canvas, access process flow settings.
Step 3 Move down to the labels section and apply a label that indicates that this process flow is configured for your DEV/STAGE environment - for example:
If required labels don't exist, you can create them 'on the fly' from here, or go to settings > labels for label management. More information is available in our Process flow labels section.
Step 4 Build the process flow. When configuring connection shapes, ensure that you select the DEV/STAGING instance:
Step 5 Build the rest of your process flow as required. If you are required to select instances for other shapes in your flow, always ensure that you select the DEV/STAGING instance.
Step 6 Test carefully to ensure that the right data (i.e. DEV/STAGING data) is processed as expected.
Step 1 When you're satisfied that your process flow is working correctly, duplicate the version that you want to use in your live environment.
Step 1 Edit the duplicated process flow and access process flow settings.
Step 2 Update the process flow name as required. You may wish to indicate PRODUCTION as part of the process flow name but this isn't essential if you follow subsequent steps to apply labels.
Step 3 Move down to the labels section. Remove the existing DEV/STAGE label and replace it with a label that clearly indicates that this process flow is configured for your PRODUCTION environment - for example:
If required labels don't exist, you can create them 'on the fly' from here, or go to settings > labels for label management. More information is available in our Process flow labels section.
Step 4 Save changes.
Step 5 For every connection shape in the process flow, access settings and change the DEV/STAGING instance to the equivalent PRODUCTION instance:
Step 6 Check all other shapes in your process flow - if instance details are present, ensure that the PRODUCTION instance is selected.
Step 7 Check other shapes in your process flow. Typically, any instance information defined for a connection shape is inherited by subsequent 'child' shapes (which require instance details) in the flow. However, it's always worth double-checking before going live.
Step 8 When you're ready, deploy and enable the process flow.
At this point, you now have different 'environment flavours' of the same process flow - for example:
Keep in mind that your Core subscription tier determines the number of active (i.e. deployed and enabled) process flows that are allowed for each company profile.
Once your PRODUCTION version is in place and running, we advise that you disable the DEV/STAGE version but leave it in place to test any future updates.
If you need to update a PRODUCTION process flow, there are two options, summarised below.
Edit the draft version of the existing PRODUCTION process flow and deploy changes once you're satisfied that the flow is running correctly.
This approach requires caution! Connections in the flow are configured for your live data, so you need to be sure that you've adjusted relevant filters during the testing phase.
With care, this option is fine for smaller changes. For more complex changes, Option 2 is the safest approach.
Edit and test the existing DEV/STAGING version of the process flow. Once testing is complete, follow steps in Phase 3, Phase 4 and Phase 5 above to duplicate the process flow and configure PRODUCTION connections. Having done this, you can either:
Remove the 'old' PRODUCTION process flow
To add a shape to a process flow, click the + icon at the point you want to place it - for example:
...then choose the type of shape to add:
Depending on the shape, the settings panel will either open immediately so you can provide details before the shape is added to the canvas, or the shape is added to the canvas and you can update settings when you're ready.
To access settings for an existing shape, click the associated 'cog' icon - for example:
The settings panel is displayed, so you can configure the shape as required - don't forget to save changes.
Settings vary for each shape - please see our Process flow shapes section for more information.
When a shape is dropped into the canvas, it's labelled with a generic name - 'map', 'flow', 'split', etc. Sometimes it can be useful to modify these names to something more specific - for example, to give a hint of the shape's purpose in your flow (particularly if you have multiple shapes of the same type!).
To change the name, simply access shape settings, then click the 'edit' icon associated with the name at the very top of the settings drawer - for example:
The name field can now be edited - update the current name as needed, then click save at the bottom of the settings drawer:
To remove a shape, click the associated 'cog' icon - for example:
...then click the delete option in the settings panel - for example:
The flexibility of process flows means that there's no 'one size fits all' approach - everyone's requirements are different, and the scope is huge. This level of flexibility is a great advantage but on the flip side - where do you start?
Here, we outline the bare bones of a process flow so you know what to consider as a minimum when getting started for the first time.
A scratchpad area will be available soon. In the meantime, we suggest registering for a sandbox account and experimenting there.
Make sure you create instances with credentials for your third-party application sandbox accounts, rather than live ones!
In their simplest form, process flows are defined to receive data from one third-party application and send it to another third-party application, perhaps with some data manipulation in between. Key elements are summarised below.
Process flows allow you to build highly complex flows with multiple routes and conditions. Here, we're considering an entry-level scenario to highlight key items as you get started with process flows.
A process flow can be associated with three version types: draft, deployed and inactive. Before you get started building process flows, we advise reading our Process flow versioning page to make sure you understand how this works.
The process flow canvas is where you build and test your process flows in a smart, visual way. This is where you define if, when, what, and how data is synced.
The process flow canvas has four main elements - a title, an actions bar, a shapes area, and a (hidden unless activated) options panel:
The process flow title bar shows the name of the process flow, as specified when it was created. The number above the title is the process flow id and the number below is the version number. To change this title, use the settings option from the actions bar.
Options in the actions bar are summarised below:
The main 'shapes area' is where you build your process flow. Start by clicking the + sign associated with the trigger shape, then choose the required shape for your next step, and start building!
We're adding new shapes all the time! For information about working with shapes, please see our Process flow shapes section.
When you access process flow settings or configure a shape in your flow, available settings are displayed in a panel on the right-hand side. For example, when we choose to access settings for a trigger shape, available trigger options are displayed:
Patchworks process flows are incredibly flexible. With a range of shapes for receiving, paginating, manipulating, batching, splitting, caching and sending data, you can build highly complex flows in a matter of minutes.
With this in mind, it's important to understand how data flows through shapes.
In the simplest of scenarios, a process flow receives a single payload of unpaginated data and this flows all the way through to completion with no manipulation or batching - one payload is received, processed, and completed.
However, if your incoming data is paginated and/or you introduce shapes capable of generating multiple payloads, it's important to understand how these pass through the flow. Essentially, any payloads that a shape outputs are added to a 'bucket' and it's that bucket that is then passed to the next shape.
So, all payloads from one shape are passed to the next shape in the same context - they don't pass down the entire flow individually.
If a 'pull' is configured to use an endpoint that paginates data, the connection shape outputs each page in its own payload.
The animation below shows how this works.
With some exceptions (detailed below), a further three attempts will be made if a process flow shape fails. Exceptions are summarised below:
This page summarises best practice insights for working with scripts in process flows. Keep in mind that scripts come in two 'flavours':
Payload scripts. A script shape is configured to run a given script on the incoming payload.
Transform scripts. A transform script is added to a field mapping (in the map shape) - this runs a given script on the associated source field before the mapping is completed.
Generally, doing everything you can in a single script is more efficient than processing multiple scripts - it means less downtime between steps, and less chance of being caught in a queue between scripts.
There are times where deploying multiple scripts is preferable from a management perspective - for example, if you're designing generic scripts to use across multiple process flows. In this case, you're usually trading a little speed for modularity.
Payload scripts (i.e. scripts run via the script shape) are more efficient than transform scripts.
Effectively, a transform script pauses the map shape, calls the script, then merges that response and continues with the map. Multiple script transforms result in multiple pauses - similar to running multiple scripts back-to-back in the process flow itself.
The script editor allows you to edit scripts and test with a given payload, but for detailed debugging, we advise using an IDE with a debugger to step through the code.
When you add a process flow, you're given a new, blank canvas to start building.
A process flow can be as simple or as complex as you need, and you can add as many as you like - maybe one flow will fulfil all of your requirements, or maybe you'll need several to achieve different things.
To add a new process flow, follow the steps below.
Step 1 Log in to the , then select process flows > process flows to access the manage process flows page.
Step 2 Click the create new flow button:
Step 3 Enter a name for this flow, then click the next step button.
It's advisable to use a name that reflects the aim of this process flow. Names must be three characters or more.
The assert shape is typically used for testing purposes - you can define a static payload which is used to validate that the current payload (i.e. the payload generated up to the point that the assert shape is encountered) is as expected.
To view/update the settings for an existing assert shape, click the associated 'cog' icon:
To configure an assert shape, all you need to do is paste the required payload and save the shape - for example:
An assert shape can only be saved if a payload is present. If you add an assert payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
Option | Summary |
---|---|
The ability to update some settings will vary, depending on the version type you're viewing. For example, you can't add flow variables to a deployed version.
# | Item | Summary |
---|---|---|
All shapes (except the connection shape) have a set timeout of 30 minutes. If processing is not completed within this time, the shape fails.
The timeout for a connection shape is configurable via the request timeout setting.
Shape | Number of retries |
---|---|
Step 4 Version 1 of the new process flow is created and displayed on the process flow canvas - for example:
This is a draft version which you can use to build and test your process flow. Once you're satisfied that your process flow is working as required, you can deploy it for use. For more information about this process please see our Process flow versioning page.
All process flows must begin with a trigger - what event is going to trigger this flow to run? For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings (for example, to set a schedule).
This opens the shape settings panel - for example:
Shapes detailed in this section are available as standard, for all core subscription tiers:
Process flow settings
Use this option to access settings for this process flow - these are used to manage settings for this process flow as a whole. For more information please see Process flow settings.
Return to trigger
If you're working on a longer process flow, use this option to quickly jump back to the start (i.e. back to the trigger shape).
Use this option to run the process flow immediately. For more information please see Running a process flow manually.
Use this option to provide a payload to be passed in before the process flow initialises. This is particularly useful if you want to test how a flow will run with expected data from a Patchworks API request.
If a process flow is running, you can use this option to stop the current run. Stopping a run in this way triggers the flow to stop at its next step; however, if an API call or script has already been triggered, the process flow will stop after these have completed. With this in mind, it's important to check any target connections to clarify what (if any) updates have been made after a process flow has been stopped.
As a process flow runs, you can view progress in real time; you can also check logs and view payloads at any stage.
1 | Process flow name & description | The name displayed for this process flow throughout the system. Optionally, you can include a description. |
2 | Queue priority | If required, you can use this dropdown field to select the priority in which runs for this process flow are picked from your queue. Choose from:
|
3 | Enabled toggle button |
4 | Use queued time | Some process flow steps (connectors, filters, set variables, transforms, etc.) can be configured to use dynamic/relative dates, where the date is relative to the time that the variable is used in the process flow. To prevent cases where filtered records are omitted because they were added between the time a flow was initialised and the time it left the queue to run, the use queued time process flow setting can be used. This allows you to choose whether any relative dates should be based on:
This option defaults to . Example: to find all records created in the last 2 hours, a relative date variable is configured as 'now minus 2 hours'. At 12:00 the process flow enters the queue - at this point, the value of this variable would be 10:00. At 14:00 the process flow leaves the queue and runs - at this point, the value of this variable would be 12:00. So, with use queued time enabled, the relative date is calculated from the queued time (giving 10:00) rather than the run time (giving 12:00), and records created between 10:00 and 12:00 are not missed.
|
5 | Remove failed payloads |
6 | Production flow | If this option is toggled ON, Patchworks will receive alerts if a run fails for this flow, which may be analysed to understand trends and areas for future enhancements. |
7 | Labels |
8 | Email failure notifications |
9 | Deploy |
10 | Variables |
11 | Versions |
Typically, a process flow run is triggered and a request for data is made via a connector shape - if the request is successful, data is retrieved and the flow continues.
However, there may be scenarios where you need to control whether the connector shape or process flow run should fail/continue based on information returned from the connection request. To achieve this, you can apply a response script to your connector shape.
When a response script is applied to a connector shape, the script runs every time a connection is attempted. The script receives the response code, headers, and body from the request and - utilising response_code actions - returns a value determining whether the connector shape/flow run continues or stops.
Response scripts are just like any other custom script, except they receive additional information from the request - see lines 11 to 14 in the example below:
To implement a response script, you should:
Response scripts are written and deployed in the usual way, via the custom scripts option. However, two additional options can be used for scripts that you intend to apply via connector shapes: response_code and message.
These options are only valid when the script is applied to a connector step as a response script.
The response_code determines how the process flow behaves if a connection request fails. Supported response_code values are:
The message is optional. If supplied, it is output in the run logs.
To apply your response script, access settings for the required connector shape and select your script from the response script dropdown field.
Here we handle the scenario where a connection response appears OK because the status code received is 200, but in fact the response body includes a string (Invalid session) which contradicts this. So, when this string is found in the response body, we want to retry the process flow.
In this case we return a response_code of 2 with a message of Invalid session:
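The script itself isn't reproduced here, but a minimal sketch of the idea is shown below. This is a hedged example only - the way the response values are exposed to the script (here assumed to be $responseCode and $body variables) and the return structure should be checked against lines 11 to 14 of the response script example and your own custom scripts setup.

```php
<?php
// Hedged sketch only - $responseCode and $body are assumed variable names;
// use the names shown in lines 11 to 14 of the response script example.
$responseCode = $responseCode ?? 200;
$body         = $body ?? '';

// The HTTP status looks fine (200), but the body reports an invalid session -
// return 2 (fail the process flow and queue it to retry) with a log message.
if ($responseCode === 200 && strpos($body, 'Invalid session') !== false) {
    return [
        'response_code' => 2,
        'message'       => 'Invalid session',
    ];
}

// Otherwise return 0 so the connector shape and process flow continue.
return ['response_code' => 0];
```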
Here we show how the payload received from a connection request is checked for an order number and an order status - retrying the process flow if a particular order status is found:
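Again, the original script isn't shown here; a hedged sketch of the logic (the $body variable and the order_number/order_status field names are illustrative assumptions) might look like this:

```php
<?php
// Hedged sketch only - $body and the order_number/order_status field names
// are illustrative assumptions; adjust to match your payload.
$payload     = json_decode($body ?? '{}', true) ?: [];
$orderNumber = $payload['order_number'] ?? null;
$orderStatus = $payload['order_status'] ?? null;

// If the order is still in a transient state, fail the flow and queue a retry,
// logging the order number so the run logs show what triggered the retry.
if ($orderStatus === 'PENDING') {
    return [
        'response_code' => 2,
        'message'       => "Order {$orderNumber} has status PENDING - retrying",
    ];
}

return ['response_code' => 0];
```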
There may be times when you need to define a filter based on incoming data matching one of many given values. Conversely, you might want to define a filter based on incoming data NOT matching one of many given values. This can be achieved using the following operators in string-type filters:
Using these operators, you can specify a comma-separated list of values that a record must have/not have in a string-type field, to be a match.
This information applies wherever filters are available, not just the filter shape.
These operators are designed to work with string-type fields only.
The contains one of many operator is used to match incoming records if the value of a given field DOES match any item from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records with a European country code in the country field (which is a string type field).
You can add a filter for the country field and select the contains one of many operator - then provide a comma-separated list of acceptable country codes as the value:
The resulting payload would only include records where the country field includes one of the specified values - i.e.:
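As an illustration (the records and the country list below are hypothetical, not the exact example referenced above), suppose the incoming payload is:

```json
[
  { "name": "Anna",   "country": "GB" },
  { "name": "Bruno",  "country": "US" },
  { "name": "Celine", "country": "FR" },
  { "name": "Dana",   "country": "AU" }
]
```

With a contains one of many filter on country and a value of GB,FR,DE,IE, only the GB and FR records would remain in the onward payload.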
The does not contain one of many operator is used to match incoming records if the value of a given field DOES NOT match any items from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records that do NOT have US or AU in the country field (which is a string type field).
You can add a filter for the country field and select the does not contain one of many operator - then provide a comma-separated list of unacceptable country codes as the value:
The resulting payload would only include records where the country field does NOT include one of the specified values - i.e.:
When defining your 'many values' list as the value for a contains one of many or a does not contain one of many filter, there are a couple of things to keep in mind:
It's important that your 'many values' are specified as a comma-separated list - so in our example:
...will match as required, but:
...will NOT match as required.
Any spaces included in your 'many values' list ARE considered when matching. For example, consider our original payload:
Suppose we are using the contains one of many operator to match all European countries and the value field is defined as below:
Notice that the final IE list item is preceded by a space. This means that our filter will only match the IE country code if it's preceded by a space in the payload, so the output would be:
Our IE record (line 8 in the payload) isn't matched because there's no space before the country code in the country field.
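For example (the values here are illustrative), if the value field were entered as:

```
GB,FR,DE, IE
```

...the final list item is the literal string ' IE' (with a leading space), so a record with "country": "IE" would not be matched.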
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields please see our Configuring SFTP connections page - details are the same.
With our process flow versioning system, you can be sure that a process flow that's currently deployed will never be edited (possibly with breaking changes) while it's in use.
To edit a deployed process flow, you take a copy as a draft and work on that - when you're ready, you can then deploy your draft.
Each time that you deploy a new version of a process flow, the previously deployed version is saved as an inactive version for future reference and, if required, future use.
For any process flow, there's always one draft version, one deployed version, and any number of inactive versions.
At any given time, a process flow can be associated with one of the following version types:
Version | Is set when... | Can be edited? | Transitions |
---|---|---|---|
There is always one draft version of a process flow. The draft version can be edited freely without any possibility of changing or breaking the version that's currently deployed. With a draft version, you can add/update shapes, and change the process flow name.
Any trigger shape settings defined for a draft version are ignored - draft versions are never triggered to run automatically.
When you're working with a draft version of a process flow, you can take the following actions:
Enable/disable the process flow. If you enable a process flow when viewing a draft version, there's no impact on the draft version. However, the deployed version will start to run automatically as per its trigger shape settings.
Run manually. Use this option to run the draft process flow immediately.
If you choose to run the draft version of a process flow manually, the draft version runs and any target connections will be updated. Where possible, it's always best to use sandbox connections when you're editing and testing draft process flows.
The deployed version of a process flow is the one that's currently in use (if it's enabled) or ready for use (if it's disabled).
The deployed version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with a deployed version of a process flow are:
Enable/disable the process flow. Just because a process flow version is deployed, it doesn't necessarily mean that it will be triggered to run automatically as per trigger shape settings. For this to happen, a process flow must be both deployed AND enabled.
Run manually. Use this option to run the process flow immediately.
Copy to draft. When you do this, the process flow remains deployed and an exact copy is taken as the current draft version, ready for you to edit - the existing draft version is discarded. This is a good solution if you've been editing a draft but reached the point where you need to restart from a known sound point.
Each time a draft version of a process flow is deployed, the previously deployed version becomes an inactive version - so you have a full version history for all deployed versions of a process flow.
An inactive version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with an inactive version of a process flow are:
Enable/disable the process flow. If you enable a process flow when viewing an inactive version, there's no impact on the inactive version. However, the deployed version will start to run automatically as per its trigger shape settings.
Run manually. Whilst you can use this option to run the process flow immediately, it's not recommended.
Copy to draft. When you copy an inactive version to draft, an exact copy is taken as the current draft version, ready for you to edit (the existing draft version is discarded). There's no impact on the deployed version.
Deploy. When you deploy an inactive version, it becomes the currently deployed version. The previously deployed version becomes a new inactive version, and the existing draft is not affected.
If you run an inactive version of a process flow manually, the inactive version runs and any target connections will be updated.
You can view all versions of a process flow via the settings panel.
When you access a process flow, the version being viewed is noted in the title bar. If you are viewing a deployed or inactive version, you'll see a message advising that edits cannot be made, and the version number is displayed beneath the title.
The version number is not the same as the version id.
To switch between different versions of a process flow, access the versions list and select the required entry.
Deploying the draft version of a process flow - or deploying an inactive version without editing it as a draft first - is a simple one-click operation from the versions list.
If you want to edit the currently deployed version of a process flow - or an inactive version - you must first copy it to draft. The existing draft version is replaced by the version you copy.
Mappings are at the heart of Patchworks.
When we pull data from one system and push it into another, it’s unlikely that the two will have a like-for-like data structure. By creating mappings between source and target data fields, the Patchworks engine knows how to transform incoming data as needed to update the target system.
The illustration below helps to visualise this:
The map shape includes everything that you need to map data fields between two connections in a process flow. When you start to create mappings, there are two approaches to consider:
Having added a map shape to a process flow, click the associated 'cog' icon to access settings:
The generate automatic mapping option is used to auto-generate mappings between your selected source and target connections.
Once auto-generation is complete, mapping rows are added for all fields found in the source data - for example:
If yes, your custom connector will behave like any of our prebuilt connectors when it comes to auto-generated mappings, adding fully mapped rows for all matched tags.
It's very easy to add individual mapping rows manually, using the add mapping rule option:
The flow control shape can be used for cases where you're pulling lots of records from a source connection, but your target connection needs to receive data in small batches. Two common use cases for this shape are:
A target system can only accept items one at a time
A target system has a maximum number of records that can be added/updated at one time
The flow control shape takes all received items, splits them into batches of your given number, and sends these batches into the target connection.
Step 1 In your process flow, add the flow control shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates - for example:
Step 3 Move down to the batch level field and select the data element that you are putting into batches. For example:
The data structure in this dropdown field is pulled from the schema associated with the source. If your data is received from a non-connector source (e.g. manual payload, API, webhook, etc.) then you can toggle ON the manual input option and enter the data path manually.
Step 4 In the batch size field, enter the number of items to be included in each batch. For example:
Step 5 By default, the payload format is auto-detected but you can set a specific format here if you prefer:
Step 6 If you're creating batches of one record, you can toggle ON the Do not wrap single records in an array option if you want the output to be this:
...rather than this:
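The example payloads aren't reproduced above; with a hypothetical single record, the difference is as follows. With the option toggled ON, the record is output on its own:

```json
{ "id": 123, "name": "Example record" }
```

...whereas with the option OFF, the same record is wrapped in an array:

```json
[ { "id": 123, "name": "Example record" } ]
```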
Step 7 Save the shape. Now when you run this process flow, data will be split into batches of your given size.
If you check the payload for the flow control step after it has run, you'll see that there's one payload for every batch created. For example:
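For illustration (hypothetical records, with a batch size of 2), five incoming records would produce three payloads:

```
Payload 1: [ { "id": 1 }, { "id": 2 } ]
Payload 2: [ { "id": 3 }, { "id": 4 } ]
Payload 3: [ { "id": 5 } ]
```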
When a connector is built, default filters can be applied at the API level, so when a process flow pulls data, the payload received has already been refined.
However, there may be times where you want to apply additional filters to a payload that's been pulled via a connection shape - for example, if the API for a connector does not support particular filters that you need.
The filter shape works with a source payload. As such, it should be placed AFTER a connection shape in process flows.
When specifying a filter value, the maximum number of characters is 1024.
To view/update the settings for an existing filter shape, click the associated 'cog' icon:
Follow the steps below to configure a filter shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload to be filtered originates.
Step 2 Click the add new filter button:
Step 3 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add a manual path.
Step 4 Use remaining operator, type and value options to define the required filter.
Step 5 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 6 Click the create button to confirm your settings.
Step 7 The filter is added to the filter shape - you can now add more filters if needed:
When defining a filter, you can choose from the following types:
When adding filters for a string-type field, there may be times when standard operators can't achieve what you need. For example, when defining multiple filter conditions, the default condition is AND - so ALL specified filters must be met for a match. But what if you need an OR condition?
For more complex filtering requirements, regex can be used.
This information applies wherever filters are available, not just the filter shape.
Let's take the following payload:
Suppose we want to retrieve any items where the value of the fruit field contains peaches OR apples. We might be tempted to add two string-type filters in the filter shape:
However, this wouldn't return any matches because we'd be looking for any records where peaches AND apples are present. Instead, we can define one string-type filter with regex:
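The exact pattern used isn't shown here, but a minimal regex expressing the OR condition would be along these lines:

```
peaches|apples
```

The pipe character acts as an 'or', so a record matches if its fruit field contains either word.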
The connector shape is used to define which connector should be used for sending or receiving data, and then which endpoint.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Any connector instances that have been added for your company profile are available to associate with a connector shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our page.
When you add a connector shape to a process flow, the settings panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connector shape, click the associated 'cog' icon to access the settings panel - for example:
Follow the steps below to configure a connector shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
Step 2 Select the endpoint that you want to use - for example
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
Step 7 Set error-handling options as required. Available options are summarised below:
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Step 9 If required you can set response handling options:
These options are summarised below:
Step 10 Save your changes.
You can use the manual payload shape to define a static payload to be used for onward processing. For example, you might define an email template that gets pushed into an email connection, or you might want to test a process flow for a connector that's currently being built by your development team.
The maximum number of characters for a single payload is 100k. Anything larger than this may cause the process flow to fail.
Any text-based data format is supported (JSON, XML, CSV, plain text); however, keep in mind that subsequent shapes in the flow may only support JSON and XML.
To view/update the settings for an existing manual payload shape, click the associated 'cog' icon:
To configure a manual payload shape, all you need to do is paste the required payload and save the shape - for example:
A manual payload shape can only be saved if a payload is present. If you add a manual payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is for SFTP connections; however, it also applies to FTP.
When you install the Patchworks SFTP connector from the marketplace and then add an instance, you'll find that two authentication methods are available:
Auth method | Summary |
---|---|
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via the ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this - for example:
Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, keeping the current filenames:
rename:store1/completed_orders/{{current_filename}}
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
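The original four-line snippet isn't reproduced here. As a hedged sketch of the idea, assuming the script receives a $data array with a meta key (as described in the notes on the data object further down this page), it might look something like this - exact key names and date format may differ in your environment:

```php
<?php
// Hedged sketch only - assumes the script receives a $data array containing a
// 'meta' key; adjust names and the date format to suit your environment.
$meta = $data['meta'] ?? [];                 // take the existing meta
$meta['original_filename'] = date('Y-m-d');  // timestamp to use for the folder name
$data['meta'] = $meta;                       // put the meta back into the data
return $data;                                // return the updated data object
```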
The path in your SFTP connection shape should be set to:
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
If you're building multiple process flows with similar requirements for field mappings, you can export the configuration for a map shape, and then import that configuration into another map shape.
When a map shape configuration is exported, a JSON file is generated and saved (automatically) to the default download folder for your browser. All mappings and associated transformations are exported. You can then import this file to any other map shape within:
The same process flow
Other process flows for your company profile
Other process flows for any of your linked company profiles
To export the configuration for a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to export:
Step 2 Click the export map button:
Step 3
The configuration is exported and saved to your default downloads folder. The filename is always map.json.
To import a mapping configuration into a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to update:
You can import a mapping configuration into a new map shape, or into an existing one. If you import a configuration into an existing map shape, any existing mappings will be overwritten.
Step 2 Click the import map button:
Step 3 Navigate to the downloaded map configuration file on your local drive, then select it.
The default filename for exported map configuration files is map.json.
Step 4 Having selected a valid mapping configuration file to import, the import completes immediately.
This page provides guidance on using the map shape to configure field mappings between two connector instances.
Step 1 Click the source endpoint option:
...source and target selection fields are displayed:
Step 2 Use source and target selection fields to choose the required connector instance and associated endpoints to be mapped - for example:
Step 3 Click the generate automatic mapping button:
...when prompted to confirm this operation, click generate mapping:
As we're configuring a new map shape, there's no danger that we would overwrite existing mappings. However, always use this option with caution if you're working with an existing map shape - any existing mapping rules are overwritten when you choose to generate automatic mappings.
If you need to access the generate automatic mapping option for an existing mapping shape, you need to click into the source and target details first.
Step 4 Patchworks attempts to apply mappings between your given source and target automatically. A mapping rule is added for each source data field and, where possible, a matched target field - for example:
From here you can refine mappings as needed. You can:
Step 5 Toggle the wrap input payload and wrap output payload options ON/OFF as required.
Where:
wrap input payload ON. Wraps the incoming payload in an array [ ] ONLY for processing within the map shape.
wrap output payload ON. Wraps the outgoing payload in an array [ ] ONLY for onward processing.
Click options below for payload examples showing how these options work in practice:
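If the interactive examples aren't available, here's a simple illustration with a hypothetical payload - wrapping just places the payload inside an array:

```
Unwrapped payload:  { "id": 123 }
Wrapped payload:    [ { "id": 123 } ]
```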
Step 6 Save changes.
You can add as many new mapping rules as required to map data between source and target connections.
There may be times where you don't want to (or can't) use the payload fields dropdown to select a field from your source/target data schema. In this case, you simply select the manual input field and enter the full schema path for the required field.
You can change the display name and/or the field associated with the source or target for any mapping rule.
If required, you can map a source field to multiple target fields - for example, you might need to send a customer order number into two (or more) target fields.
Sometimes it can be useful to map multiple source fields to a single target field. For example, you might have a target connection which expects a single field for 'full name', but a source connection with one field for 'first name' and another field for 'surname'.
When you choose to delete a mapping rule, it's removed from the list immediately. However, the deletion is not permanent until you choose to save the mapping shape.
Field transformations can be defined to change the value of a data field pulled from a source system before it is sent to its target. A transformation comprises one or more transform functions.
This page explains how to add a transformation for a field mapping.
For a summary of available transform functions please see the section.
For information about adding a field transformation using a cross-reference lookup table, please see our section.
To add a new transformation for a field mapping, you start by adding a new transformation and then build the required functions. To do this, follow the steps below:
Step 1 Access the required process flow and then edit the map shape to be updated with a transform:
Step 2 Find the mapping that you want to update, then click the transform icon (between source and target elements). For our example, we're going to add a prefix to the 'id' field:
Step 3 Click the add transform button:
Step 4 Use the select a function field to choose the type of function that you need to use (functions are organised by type):
Step 5 Depending on the type of function you select, additional fields are displayed for you to complete. Update these as required - for our example, we're entering the text to be added as a prefix:
Step 6 Now we need to confirm which source field this transform field should be applied for - click the add field button:
Step 7 Select the required field:
In straightforward scenarios, this will typically be the same source field as defined for the mapping row. However, more complex scenarios may prompt multiple options here - for example, if you apply multiple transforms to the same mapping.
Step 8 Accept your changes.
Step 9 Add more fields if necessary.
Step 10 When you're satisfied that all required fields have been added, accept changes and then save the shape settings.
The array join transform function is used to join elements of an array as a string, with a user-defined delimiter. For example, you might have product data in an array:
...and need to convert these items to a string before pushing to a single destination field (with each item in the string delimited by a character of your choice):
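For example (illustrative values), joining with a comma as the delimiter:

```
Source array:   ["Shirt", "Hat", "Socks"]
Joined string:  "Shirt,Hat,Socks"
```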
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select join from the array category:
Step 5 In the delimiter field, enter the character that you want to use to delimit each array field in the string:
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom dynamic date transform function is used to set a target field to the current date and time, based on the date and time that the process flow runs. You can also define rounding and adjustments.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom dynamic date:
Step 5 Optionally, you can add adjustment settings - for example:
These options are summarised below:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom dynamic date will be mapped to the given target field.
All process flow run jobs are added to a queue and, by default, are picked for processing when a slot becomes available - i.e. they have a regular priority.
If your process includes a , note that any 'sub flows' do NOT inherit the queue priority from the 'calling' flow. You should set the priority for these as required.
Use this option to .
With the introduction of queuing for process flow runs, all scheduled process flows are added to a queue when they are initialised. This means that the time a flow is initialised is not the same as the time the flow actually runs - sometimes the run will be almost instant, but in busier periods there may be some minutes between starting and running a flow.
Toggle this option on if you want to remove payloads that would otherwise cause this flow to fail. This is useful if multiple payloads pass through a process flow (for example, if received data is paginated or batched via a flow control shape) and one or more of these includes a data issue.
With this option switched on, the failed payload is removed and the flow continues. If the process flow completes successfully, its run status is set to partial success.
Failed payloads can be viewed in , via the . These payloads can be downloaded from :
View and update labels associated with this process flow. You can remove an existing label, apply labels from the dropdown list, or create a new label.
Use the dropdown list to select a notification group to receive an email notification if a run of this process flow fails.
Use this option to .
Define variables and then reference these values throughout the entire process flow.
All existing versions of a process flow are displayed. From here you can select any version to view the flow at that point in time. You can also choose to copy a version to draft, and to deploy a version.
Value | Notes |
---|---|
In process flows, the map shape is used to define how data fields pulled from one connector correlate with data fields in another connector, and whether any data transformations are required to achieve this.
If your organisation has in-house development expertise and complex transformation requirements, you can use our custom scripts feature to code your own scripts for use with field mappings.
For more information about working with these settings, please see our page.
All Patchworks prebuilt connectors (i.e. connectors installed from the Patchworks marketplace) adopt a standard approach for tagging common fields found in data schemas for a range of entity types (customers, orders, refunds, products, fulfillments, etc.). So, if your process flow includes connections to sync data between two prebuilt connectors, it's highly likely that auto-generating mappings will complete a lot of the work for you.
Where matching tags are found, the mapping rows will include both source and target fields (you can adjust these manually and/or apply transformations, as needed).
Any fields found in the source data which could not be matched by tag are displayed in partial mapping rows, ready for you to complete manually.
For more information about using the generate automatic mapping feature, please see our page.
If your process flow includes custom connections (i.e. connectors that have been built by your organisation, using the connector builder), you can still use the generate automatic mapping option. The success of this will depend on whether field tagging was applied to your connector during the build:
If no, Patchworks won't be able to match any source fields to a target automatically - partial mapping rows are added for all source fields found, ready for you to complete manually.
We recommend that you always try the generate automatic mapping option first and then manually add any extra rows if needed. However, there's no reason that you couldn't add all of your mappings manually if preferred.
For more information about adding mappings manually, please see our page.
This opens the - for example:
The manual data path field supports .
Presentation of the value field is dependent upon your selected type. For example, if the type field is set to specific date, you can pick a date for the value:
When defining a value, you can include , , and variables.
Don't forget that when a process flow runs you can view the payload at each step - this is a great way to check that your filter is refining data as expected.
Type | Expected value |
---|---|
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the .
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a .
Option | Summary |
---|---|
Option | Summary |
---|---|
Option | Summary | Endpoint method |
---|---|---|
This opens the - for example:
Further information on these authentication methods can be found on our page.
Option | Summary |
---|---|
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a script, as summarised below.
The data object in the script contains three items: payload, meta, and variables.
Our script creates a timestamp, puts it into the meta, and then puts the meta back into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
Any instances defined for your company profile are available to select as the source or target. If you aren't using a connector to retrieve data (for example, you are sending in data via the Inbound API or a webhook), you won't select a source endpoint - instead, use the override source format dropdown field to select the format of your incoming data:
If you've used the option to generate an initial set of mappings, you may find that some source fields could not be auto-mapped. In these cases, a mapping rule is added for each un-mapped source field, so you can either add the required destination or .
In this case, you would define mappings for the required source and target fields, then add a transformation to join the two source fields.
Transform | Description |
---|---|
Transform | Description |
---|---|
Transform | Description / Notes |
---|---|
Field | Summary | Example |
---|---|---|
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get
files matching a regular expression, in a store1
folder:
The path is added as a regular expression, explained below:
0 | Continue |
1 | Fail the connector step and retry. The connector step is marked as failed and the queue will attempt it again. |
2 | Fail the process flow and queue it to retry. The process flow is marked as failed and queued for a retry. |
3 | Fail the process flow and do not retry. |
Draft | The process flow is being built. | A new process flow is added; a deployed version is copied to draft; an inactive version is copied to draft | Yes | Deploy |
Deployed | The process flow is currently in use, or ready for use. | A draft version is deployed; an inactive version is deployed | No | Copy to draft |
Inactive | The process flow was previously deployed but superseded by a later deployment. | A draft process flow is deployed; an inactive process flow is deployed | No | Copy to draft; Deploy |
Retries | Sets the number of retries that will be attempted if a connection can't be made. You can define a value between |
Backoff | If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff value. You can define a value between |
Allow unsuccessful statuses | If you want the process flow to continue even if the connection response is unsuccessful, toggle this option |
Round |
| Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00
If set to |
Units | If you want to adjust the date/time, select the required unit - choose from | Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00 and you want to adjust it by 1 day.
In this case, you would select |
Adjustment | Having selected an adjustment unit, enter the required number of that unit here. | See |
The cache lookup transform function is used to lookup and use data from a cache created earlier in the flow.
If caches have been added to your current process flow or company-level caches have been added for use in any other process flows, you can reference these in field mapping transformations.
For details please see Referencing a cache in mapping transformations in our cache section.
String | A text string - for example : |
String length |
Number | A number - for example: |
Specific date | A day, month and year, selected from a date picker. |
Dynamic date | Specify a date/time which is relative to a +/- number of units (seconds, minutes, hours, days, months, years). For example: |
Boolean |
Null comparison | A field is |
Variable | Designed specifically for cases where you are comparing a variable value as the filter comparison. When selected, a |
This will return the correct payload - i.e. any records where peaches AND apples are present:
Raw | Push the payload exactly as it is pulled - no modifications are made. |
First | This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed. |
Wrapped | This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects. The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes. Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, |
Save response AS payload | Set this option to | POST PUT PATCH DELETE |
Save response IN payload | Set this option to | GET POST PUT PATCH DELETE |
Expect an empty response | Set this option to | POST GET |
FTP command | A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command. For example:
|
Root | This field is only needed if you are specifying a regular expression in the subsequent |
Path | If the name of the file that you want to target is static and known, enter the full path to it here - for example:
If the name is variable and therefore unknown, you can specify a regular expression as the |
Original filename |
Original path |
Join elements of an array as a string, based on a defined delimiter. |
Apply the date that a process flow runs, with or without adjustments. |
Apply a static date. |
Convert a date to a predefined or custom format. |
Round a date up/down to the start/end of the day. |
Time now | Returns the current date and time in your required format. |
Timezone | Convert dates to a selected timezone. |
Cast to boolean | Convert a number value to a boolean (true/false) value based on PHP logic. |
Change the source field data type from |
Ceiling | Round up to the nearest whole number. |
Apply a static number. |
Floor | Round down to the nearest whole number. |
Make negative | Convert number to a negative. |
Make positive | Convert number to a positive. |
Perform a mathematical operation for selected fields. |
Change the number of decimal places. |
Define override values for conversion to true/false. |
Change the source field data type from |
Reference a value from cached data. |
Convert weight | Convert a specified weight unit to a given alternative. |
Apply a true or false value. |
Fallback | Set a default value to be used if the given input is empty. Blank values are supported. |
Map |
Convert a null value to an empty string. |
Convert a null value to zero (0). |
Convert a source value to null. |
Change the source field data type from |
Change the source field data type from |
Join selected fields with a selected character. |
Specify a comma separated list of field values to be matched for inclusion. |
Convert string to boolean | Convert a string value to a boolean (true/false) value based on PHP logic. |
Country code | Apply country codes of a selected type (Alpha 1, Alpha 2, Numeric). Note that this transform will cause the map step to fail if an empty value is received. |
Country name | Return the country name for a country code. Note that this transform will cause the map step to fail if an empty value is received. |
Apply static text or reference variables. |
Specify a comma separated list of field values to be matched for exclusion. |
Get the first word from a string. |
Hash | Convert a string to a SHA1 Hash. |
Encode data into JSON format. Note that although this is listed as a |
Get the last word from a string. |
Limit | Truncates a string to a given length. |
Lowercase | Convert to lowercase. |
Pad an existing string of characters to a given length, using a given character. |
Prefix | Add a string to the beginning of a field. |
Replace any given character(s) with another character. |
Split elements of a string into an array. |
Substring | Return a given number of characters from a given start. |
Suffix | Add a string to the end of a field. |
Trim whitespace | Remove any characters around a string. |
Uppercase | Convert to uppercase. |
URL decode | Convert an encoded URL into a readable format. |
URL encode | Convert a string to a URL encoded string. |
The cast to boolean transform function is used to convert given values to true
/false
based on PHP logic, but with the option to define overrides for specific input values. For example, consider the following payload:
Suppose we want to define a mapping for the fruit_included
item (line 2) but convert the numeric value to a true
/false
(boolean) value. Without an override setting, transforming the number to a boolean value would result in the following payload:
This is because standard PHP logic determines that 0
equates to a boolean false
value. But in this specific case, a quirk in our source data is such that the 0
value actually means true
- hence a list of fruits follows.
So, we need a way to specify an override for this field, where 0
equates to true
. We can do this via the other > cast to boolean transform function, thereby achieving the desired result:
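For illustration (the fruits field is hypothetical - only fruit_included is taken from the scenario above), a source item such as:

```json
{ "fruit_included": 0, "fruits": ["apples", "peaches"] }
```

...would normally map fruit_included to false, but with 0 entered as a true override value it maps to true:

```json
{ "fruit_included": true }
```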
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to boolean from the other category:
Step 5
In the true values (override)
and/or false values (override)
fields, enter specific values that should override standard PHP logic:
Multiple override values can be entered - use a comma to separate each one.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The math transform function is used to perform a mathematical operation for selected fields. For example, your incoming payload might include customer records, each with a series of numeric value
fields that need to be added together so the total can be pushed to a total
field in the target system.
The following mathematical operations are available:
Add
Subtract
Multiply
Divide
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes customer records, each with three value fields (value1
, value2
, value3
) that must be added together and pushed to a total
field in the target system.
The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be added together and then pushed to the target.
Step 1 In your process flow, access settings for your map shape:
Step 2
Find (or add) the mapping row which requires a math transformation. In the example below, we have a row that's currently set to map the source first name
field into the destination full name
field:
Step 3 On the source side of the mapping row, we need to include all the fields to be used in our mathematical operation. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to use - for example:
In our example, our source data is coming in via a manual payload so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to include in the mathematical operation.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a math transform function to define the required calculation based on these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select math from the number section in the list of transform functions:
...math options are displayed:
Step 4 Click in the operator field and select the type of calculation to be performed - you can choose from add, subtract, multiply and divide:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be used in the calculation:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be used - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to include any more source fields to be used in the calculation. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are processed in the sequence that they are added here.
Step 11 Having added all required source fields to be calculated, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the mathematical operation will be performed for the given source fields and the total value is pushed to the defined target field. The example below shows an incoming payload before and after the math transformation is applied:
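As an indicative sketch based on the scenario above (field names other than value1, value2, value3 and total are hypothetical), an incoming record such as:

```json
{ "customer_ref": "C-100", "value1": 10, "value2": 2.5, "value3": 7 }
```

...with an add operation across the three value fields results in the target total field being set as follows:

```json
{ "total": 19.5 }
```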
User pass | The instance is authenticated by providing a username and password for the SFTP server. |
Key pass | The instance is authenticated by providing a private key (RSA |
The cast to string transform function is used to change the data type associated with a source field from number to string. For example, you might have an id field in a source system that's stored as a number value, but your destination system expects the id to be a string.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to string from the number category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The format transform function is used to change a date value to a different format. For example:
...might be changed to:
A range of predefined date formats is available for selection, or you can set your own custom format.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select format from the date category:
Step 5 Click in the format field to select a predefined date format that incoming dates should be converted to:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
Internally, the format transform function uses Laravel's date format methods, which in turn call PHP date format methods. Commonly used format specifiers are listed below - full details are available in this Laravel guide.
The following characters are commonly used to specify days in custom format dates.
The following characters are commonly used to specify months in custom format dates.
The following characters are commonly used to specify years in custom format dates.
The following characters are commonly used to specify times in custom format dates.
Unix Epoch dates must be received as a number, not a string - i.e.:
1701734400
rather than "1701734400"
If your Unix dates are provided as strings, you should convert these to numbers. To achieve this, add a cast to number transform for the date field BEFORE the date format transform function.
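For example (assuming UTC and a format of Y-m-d), the chain of transforms behaves roughly as follows:

```
"1701734400"  →  cast to number  →  1701734400  →  format (Y-m-d)  →  2023-12-05
```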
The round date transform function is used to round source dates to either the start or end of the day, where:
start of day
changes the time to 00:00
for the received date
end of day
changes the time to 23:59
for the received date
So, you can round a given source date before sending the rounded value into a given target field.
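For example, assuming a (hypothetical) source value of 2023-08-10 14:27:56:

```
start of day  →  2023-08-10 00:00
end of day    →  2023-08-10 23:59
```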
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round date:
Step 5 Choose your required rounding:
Step 6 Accept your changes and save the transformation - at this point your mapping row is displayed without a target. From here, you can go ahead and add a target field:
The custom static date transform function is used to set a target field to a given date and time.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom static date:
Step 5 Click anywhere in the date field, or click the calendar icon, to open a date picker:
Step 6 Set the required date and time.
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom static date will be mapped to the given target field.
The custom number transform function is used to map a given number to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom number transformation is used we don't select a source field - the custom number transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom number:
Step 5 Move down to the custom number field and enter your required number - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom number will be mapped to the given target field.
The cast boolean to string transform function is used to change the data type associated with a source field from boolean
to string
.
A boolean
data type can have only two possible states: true
or false
.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast boolean to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The round number transform function is used to change the number of decimal places for a number value. For example:
...might be changed to:
With the round number transform you can specify the number of decimal places that should be applied to incoming numeric values.
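For example, with decimal places set to 2, a (hypothetical) incoming value is transformed as follows:

```
10.6666  →  10.67
```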
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round number from the number category:
Step 5 Move to the decimal places field and enter the number of decimal places required for transformed values - for example:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The null to string transform function is used to convert incoming null
values to an empty string. For example:
...is converted to:
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom boolean transform function is used to map a value of true
or false
to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom boolean transformation is used we don't select a source field - the custom boolean transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom boolean:
Step 5 Move down to the value field and select your required true/false value - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the selected custom boolean value will be mapped to the given target field.
The script transform function is used to apply an existing script to the source value, and the updated field value is pushed to the target field.
Make sure that the required script has been added before applying it as a transform function.
Any payloads passed in and out of a script transform are verbatim - there is no JSON encoding.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select script:
Step 5 Click in the script version field and select the script/version that you want to apply for this field transformation - for example:
All scripts and versions are available for selection. If you choose a script/version that is not currently deployed, it will be deployed automatically.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select/update the target field and then the mapping in the usual way.
The null value transform function is used to replace the value of a source field with null.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null value from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The concatenate transform function is used to join the values for two or more source fields (using a given joining character) and then map the output of this transformation to a destination field.
For example, you might have a source system that captures the first name
and last name
for customer records, and then a destination system that expects this information in a single name
field.
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes the first name
and last name
for customer records, but our destination system expects this information in a single full_name
field. The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be joined and then pushed to the specified destination.
Step 1 In your process flow, access settings for your map shape:
Step 2
Find (or add) the mapping row which requires a concatenate transformation. In the example below, we have a row that's currently set to map the source first name
field into the destination full name
field:
Step 3 On the source side of the mapping row, we need to add all the fields that need to be joined. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to join - for example:
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to join.
With all required source fields defined for our mapping row, we can add a concatenate transform function to join the values for these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select concatenate from the string section in the list of transform functions:
...concatenate options are displayed:
Step 4 In the join character field, enter the character that you want to use to join each of your source fields - for example, a hyphen or a space:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be joined:
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be joined - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to add any more source fields to be joined. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are joined in the sequence that they are added here.
Step 11 Having added all required source fields to be joined, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the given source fields for this mapping row will be joined and then that value is pushed to the target. The example below shows an incoming payload before and after the concatenate transformation is applied:
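As an indicative sketch (record structure is hypothetical), with a space entered as the join character an incoming record such as:

```json
{ "first name": "Jane", "last name": "Smith" }
```

...results in the target full_name field being set as follows:

```json
{ "full_name": "Jane Smith" }
```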
The first word transform function is used to extract the first word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the first item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the first word. The transform checks incoming string values and determines the 'first word' to be the word before the first occurrence of the given delimiter.
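For example, with a comma entered as the delimiter, a (hypothetical) source value is transformed as follows:

```
"t-shirt,hoodie,cap"  →  "t-shirt"
```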
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select first word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped:
.
+
*
?
^
$
(
)
[
]
{
}
|
\
/
For example, a delimiter of *
would be entered as:
\*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice):
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to float transform function is used to change the data type associated with a source field from string
to float
.
A float is a type of number which uses a floating point to represent a decimal or fractional value. They are typically used for very large or very small values where there are a lot of numbers after the point - for example: 5.3333 or 0.0001.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to float from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to number transform function is used to change the data type associated with a source field from string to number. For example, you might have an id field in a source system that's stored as a string value, but your destination system expects the id to be a number.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to number from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom string transform function is used to map a given string to a target field. This string can be static, or you can reference flow variables and cached data.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom string transformation is used we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom string:
Step 5 Move down to the custom string field and enter your required text or variables - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom string (or associated values from variables) will be mapped to the given target field.
The null to zero transform function is used to convert incoming null values to zero (0). For example:
...is converted to:
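In other words, a null source value is handled as follows:

```
null  →  0
```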
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to zero from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The JSON encode transform function is used to encode incoming values as JSON. For example, you might have product data in a string:
...and need to encode the values for pushing to the destination system:
Although this is listed as a string type transform, in fact any data type can be encoded.
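As a hedged illustration (values hypothetical), encoding an array source value produces a JSON string:

```
["Blue", "Large"]  →  "[\"Blue\",\"Large\"]"
```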
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select JSON encode from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes:
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The replace transform function is used to replace an existing source string value with either:
An alternative string value
An empty value
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select replace from the string category:
Step 5 Update search and replace fields with your required values:
For the replace field, you can enter another string or leave the field blank to replace the source with an empty value.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
Note that when a string
type is selected, you can choose the regex operator and define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match
is used for pattern matching. For an example, see .
You can also match a string
-type field according to whether or not the value is one of many items in a given list. For more information, see .
A number which represents the expected string length for the received payload. Here, the 'payload' might refer to a targeted field within the incoming payload, or the entire payload.
For example, if you want to ensure that an objectId
field is never empty, you would define a filter for objectId
greater than
0
:
In this case, toggling the keep matching
option to OFF means that the ongoing payload will include only items where this field is not empty. Conversely, toggle this option on if you want to pass on empty payloads for any reason.
You can use the same principle to check for empty payloads (as opposed to a specific field). In this case you would define a filter for *
greater than
0
:
It's important to be aware that relative date/time variables are affected by the .
A true or false value. For example, if you only want to consider items where an itemRef
field is set to true
, you would define a filter for itemRef
equals
true
:
Generally, if your process flow is pulling from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
This option provides the ability to access the response body via the payload. This can be useful for cases where an API returns a successful response despite an error - by inspecting response information from the payload itself, you can determine whether or not a request is successful.
By default, the response is saved in a field named response
- for example:
However, when the save response in payload
option is toggled on
, you can specify your preferred field name - for example:
When specifying a path to a given folder in this way, you don't need a /
at the start or at the end.
This field is not currently used. For information about working with original filenames please see the section below.
This field is not currently used. For information about working with original paths please see the section below.
Convert values using a .
Apply a field-level script. Note that a script will time out if it runs for more than 120 seconds. |
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
In our example, our source data is coming in via a manual payload so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 9 Go to stage 2.
All source fields that were added for this mapping in stage 1 will be available for selection here.
For more information about referencing flow variables in a custom string, please see our page. For more information about referencing cached data in a custom string, please see our page.
d | Day of the month with leading zeros (01 to 31). |
j | Day of the month without leading zeros (1 to 31). |
D | A textual representation of a day in three letters (Mon to Sun). |
l | A full textual representation of the day of the week (Monday to Sunday). |
m | Numeric representation of a month with leading zeros (01 to 12). |
n | Numeric representation of a month without leading zeros (1 to 12). |
M | A textual representation of a month in three letters (Jan to Dec). |
F | A full textual representation of a month (January to December). |
Y | Four-digit representation of the year (e.g. 2023). |
y | Two-digit representation of the year (e.g. 23). |
H | Hour in 24-hour format with leading zeros (00 to 23). |
i | Minutes with leading zeros (00 to 59). |
s | Seconds with leading zeros (00 to 59). |
a | Lowercase Ante meridiem (am) or Post meridiem (pm). |
A | Uppercase Ante meridiem (AM) or Post meridiem (PM). |
You need to map an array within a payload but also include one of the 'parent' fields - for example, map the following:
...to:
To achieve this, the target field must be defined with double *.
characters in the data path. For example:
If you don't do this, and just enter the standard single *.
characters - for example:
....the output will be a flat object - for example:
Shapes detailed in this section are available for Professional and Enterprise core subscription tiers, or as a bolt-on for the Standard tier.
The run process flow shape is used to call one process flow from another, so you can run process flows in a chain. For example, you might have a process flow that receives data from a webhook, applies filters and then hits a run process flow shape to call another flow with that data.
The default behaviour is for the payload from the end of the calling process flow to be sent into the called process flow for onward processing. However, when configuring a run process flow shape you can add a manual payload - in this case, your manual payload will be sent into the called process flow.
The run process flow shape also allows you to choose whether any variables associated with the called process flow should be applied.
A called process flow will only run if it is enabled.
A called process flow is always added to your run queue for processing - even if the parent flow is triggered manually.
A called process flow does NOT inherit the queue priority of its parent - you should set the priority of these process flows individually.
If you don't configure a manual payload in the run process flow shape, the final payload from the calling process flow is always sent into the called process flow.
If multiple payloads are passed into the run process flow shape, the called process flow will run once for each payload - these runs take place in parallel.
When a payload is passed to a 'child' process flow, meta variables are included.
You cannot create a recursive process flow loop - for example, if Process Flow A calls Process Flow B, you cannot then call Process Flow A from Process Flow B.
Step 1 In your process flow, add the run process flow shape in the usual way:
Step 2 Click in the flow field and select the process flow that you want to run:
If you have a lot of process flows, you can search for the one you want to use here.
If you select a process flow that is not enabled, an error will be displayed when you attempt to save these settings. In this case, you should access the process flow you want to call and enable it, then come back to save this shape.
Step 3 Move down to the settings section and choose which version of the selected process flow to call:
Available options are summarised below:
Step 4 If your selected process flow is associated with any process variables, these are shown - you can choose to enable or disable these:
Step 5 If you want to pass a manual payload into this process flow, toggle the specify payload manually option ON and paste the required payload into the supplied payload field:
The manual payload can be any format - JSON, XML, plain text, etc.
Step 6 Save the shape. The configured shape is added to the canvas with the sub-flow available as a link - for example:
Click this link to open the sub-flow in a new browser tab.
The last word transform function is used to extract the last word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the last item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the last word. The transform checks incoming string values and determines the 'last word' to be the word after the last occurrence of the given delimiter.
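For example, with a comma entered as the delimiter, a (hypothetical) source value is transformed as follows:

```
"t-shirt,hoodie,cap"  →  "cap"
```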
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select last word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped:
.
+
*
?
^
$
(
)
[
]
{
}
|
\
/
For example, a delimiter of *
would be entered as:
\*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes:
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The split string transform function is used to split elements of a string into an array, with a user-defined delimiter. For example, you might have product data in a string:
...and need to convert these items to an array before pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine where each split occurs. The transform checks incoming string values and determines each array item to be the word after each comma.
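For example, with a comma entered as the delimiter, a (hypothetical) source value is transformed as follows:

```
"t-shirt,hoodie,cap"  →  ["t-shirt", "hoodie", "cap"]
```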
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select split from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped:
.
+
*
?
^
$
(
)
[
]
{
}
|
\
/
For example, a delimiter of *
would be entered as:
\*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The notify shape is used to create custom notification messages for output to run logs and email messages.
To achieve this, you compose a notification template
within the shape settings using any combination of static text and variables. When the process flow runs and hits this shape, the notification message is generated from your defined template and is then:
Output to the run logs AND/OR
Emailed to recipients in selected notification groups
Notification templates can include dynamic content from variables.
Email notifications are sent irrespective of whether a process flow is enabled and deployed.
The maximum number of email notifications that can be sent (across all process flows for your company profile) is determined by your Core subscription tier. If you manage multiple linked profiles, each one of these will have its own allowance.
Notification templates can include dynamic content from payload, flow, and meta variables.
Variables return the first 100 characters of the associated content.
In the example below, we use two flow variables to retrieve a store name (store_name
) and a team name (query_team
), and a payload variable (our_id
) to retrieve required information from the payload:
Step 1 In your process flow, add the notify shape in the usual way:
Step 2 Select an alert level from the dropdown field:
The selection made here determines how this notification is displayed in logs and email messages:
Email notifications always include the alert level as a status, which can be useful if you want to define mailbox filters based on the alert level. An example message is shown below:
Step 3 Choose notification channel(s) to be used - i.e. how these notifications should be communicated:
Available options are summarised below:
Step 4
This field is not displayed if the channel is set to log in the previous step.
The email limit determines the maximum number of emails that the notify shape can send per flow run:
For example, if you select a notification group that contains 20 recipients and set the email limit to 10, the first 10 recipients will receive emails and the remaining 10 in the group will NOT receive emails.
Step 5
This field is not displayed if the channel is set to log in the previous step.
If you want to send this notification to email recipients, select the required notification group:
All defined notification groups are available for selection. If you need to add a new group or check recipients in an existing group, please check our Notification groups page.
Remember that the email limit defined in the previous step may limit the number of recipients to receive emails. It's a good idea to check how many recipients are in your selected notification group, and that your email limit is set appropriately.
Step 6 Add your required notification text/variables to the notification template section:
Remember that a notification message can include static text and (using variables) dynamic content.
If you need more space, you can drag this field further down using the handlebar in the bottom right corner:
Step 7 Save the shape.
The pad transform function is used to pad an existing string of characters to a given length, using a given character. You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
The payload item below contains a string that's 8 characters long:
If we apply padding to a length of 20
using a *
character to the right
, the result would be:
Here, we have an extra 12 * characters to the right, giving a string length of 20. However, if we apply the *
character to both
, the result would be:
Now the padding is applied with 6 characters to the left of the original string and 6 characters to the right.
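For example, assuming a (hypothetical) 8-character source string of ABCD1234, padding to a length of 20 with the * character gives:

```
right:  ABCD1234************
both:   ******ABCD1234******
```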
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select pad from the string section:
Step 5
Click in the direction
field and select where you would like padding to be applied:
You can apply padding to the left
(i.e. immediately before the existing string), to the right
(i.e. immediately after the existing string), or both
(immediately before and after, as equally as possible).
Step 6
In the length
field, specify the number of characters that you'd like the final (i.e. transformed) string to be - for example:
Step 7
In the pad character
field, specify the character that you'd like to use for padding - for example:
If you want padding to be applied with spaces, press the space bar once in this field.
Step 8 Click the add field button:
Step 9 Click in source fields and select the source field to be used for this transform:
Step 10 Accept your changes (twice).
Step 11 Save the transform.
The route shape is used for cases where a process flow needs to split into multiple paths, based on a given set of conditions. Conditions are defined based on any fields found in the schema associated with your source data, so the scope for using routes is huge.
To define multiple routes for your process flow you must:
Add a route shape.
Configure the route shape to add required routes and conditions.
Build the flow for each configured route by adding shapes in the usual way.
By default, multiple routes are processed in parallel when a process flow runs.
If a route shape is added without route conditions, incoming data flows down ALL defined routes.
When you add a route shape to a process flow, the shape is added to your canvas with two placeholder route stems - for example:
To configure these routes (and add more if needed) click the 'cog' icon associated with this shape to access route settings.
Follow the steps below to configure route data for a route shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload for the route shape is coming from - for example:
Step 2 Select a routing method to determine what should happen if a payload record matches conditions defined for more than one route:
These options are summarised below:
Step 3 Click the 'edit' icon associated with the first route:
Step 4 Enter your required name for this route - it's a good idea to ensure this provides an indication of the route's purpose. For example:
Step 5
If you intend to define multiple rules/filters for this route, select the required operator from the filter logic dropdown field:
Select AND
so all defined filters must apply for a match
Select OR
so any one of the defined filters will result in a match
If you only need to define one filter, just leave the default setting in place (it has no effect).
Step 6 Click the add new filter button:
Step 7 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add syntax for dynamic variables:
The manual data path field supports metadata variables.
Step 8 Use remaining operator, type and value options to define the required filter.
When a string filter type is selected, you have the option to select the regex operator and then define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match
is used for pattern matching.
Step 9 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 10 Click the save button (at the bottom of the panel) to confirm these settings. The new rule is added for your first route - for example:
Step 11 Repeat steps 6 to 10 to add any additional rules for this route. When you've added all required rules, click the save/update button at the bottom of the panel.
Step 12 Repeat steps 3 to 11 to configure the second route.
Step 13 Add any additional routes required using the add route button. Each time you add a new route, the canvas updates with an additional route stem from your route shape.
Step 14 Save your changes.
Having defined all required routes and associated conditions, the route shape on the canvas will have the corresponding number of route stems, ready for you to add shapes. For example:
Click the + sign for each branch in turn, then add the required shapes for each route flow.
All process flows must begin with a trigger - it’s this that determines when and how frequently the process flow will run. For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings:
Having accessed settings for a trigger shape, you can select the required trigger type - this determines any subsequent options that are displayed:
Currently, schedule, webhook, and event listener trigger types are available.
The split shape is used to split-out a given element of a payload at a given data element. When you split data, the specified element (including any nested elements) is extracted for onward processing.
For example, your process flow might be pulling customer data from a source connection, but you need to send address details to a different endpoint. In this case, you'd use the route shape to create two different routes, mapping just customer data down one, and splitting out addresses for the other.
Step 1 In your process flow, add the split shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates:
Step 3 Move down to the level to split section and use the dropdown data path to select the required data element to split - for example:
Remember - any data (including nested data) within the selected element will be split out into a new payload.
Step 4 If required, you can add a wrapper key. This wraps the entire payload in an element of the given name - for example:
...would wrap the payload as shown below:
Step 5 Save the shape.
The trigger webhook option can be used if you want to trigger a process flow whenever a given event occurs in your third-party application.
When you choose to add a webhook to a process flow trigger shape, a Patchworks webhook URL with built-in authentication is auto-generated. This URL must be added to your third-party application, so it knows where to send event data.
How you use webhooks is driven by your business requirements, and the capabilities of your third-party application. For example, your third-party application might send a webhook which includes a batch of orders to be processed in the body
, or the webhook body
might simply contain a notification message indicating that orders are ready for you to pull.
Patchworks webhook URLs are generated in the form:
For example:
The {{webhook_id}}
is a Patchworks signature which is generated as a random hash (that doesn't expire). This provides built-in authentication for our URLs; however, they should still be kept private.
The default response for a successful webhook trigger is a status code of 200
, with the following body:
If required, you can customise this response.
Follow the steps below to add a new webhook trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new webhook button:
...a unique Patchworks webhook URL is generated:
Step 3 Copy this URL and paste it into your third-party application.
The documentation for your third-party application should guide you through any required setup for webhooks.
Step 4 If you want to customise the response for your webhook, click the edit icon associated with the URL and make required changes. For more information please see Customising your webhook response.
Step 5 Build the rest of your process flow as needed to handle incoming data from your defined webhook(s).
Step 6 Make sure that your process flow is deployed and enabled - webhooks will not be received if this isn't done.
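As an illustration only, a notification-style webhook body sent by a third-party application might look something like the example below - the structure and field names are hypothetical, so always check your application's webhook documentation for the actual format:

```json
{
  "event": "orders.ready",
  "order_count": 12,
  "created_at": "2025-01-01T09:30:00Z"
}
```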
If required, you can change the default response for your webhook by selecting the 'edit' icon associated with the URL - for example:
Here, you'll find options to select an alternative status code and specify new body text:
Here you can:
Use the status code dropdown field to select the required response code.
Enter the required text in the body field.
Select the required format for your body content (choose from JSON, XML or Plain text).
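For example, you might return a 202 status code with a short JSON acknowledgement - the body below is purely illustrative, so use whatever your third-party application expects:

```json
{
  "status": "accepted",
  "message": "Payload received and queued for processing"
}
```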
The track data shape is used to track processed data, based on field paths that you define. When data passes through a track data shape, the values associated with your defined field paths are tracked - which means they can be reviewed from the tracked data page.
For example, you might want to track all customer_id values that pass through a flow, so at any time you can quickly check if/when/how a given customer record has been processed.
Tracked data is available for viewing for 15 days after it was last tracked.
Depending on data volumes, allow 10 minutes for tracked data to be visible in data pools.
The track data shape works with incoming payloads from a connection shape, a manual payload, an API request, or a webhook.
JSON payloads are supported.
You can add as many track data shapes to a process flow as required. For example, you might place one immediately after a receiving connector to track everything received before anything else happens to the data, and another after the final sending connector to track everything sent into your destination system.
To add and configure a new track shape, follow the steps below.
Step 1 In your process flow, add the track data shape in the usual way:
Step 2 Configure the settings as required - the table below summarises available fields:
Step 3 Save your settings, then access them again:
...you'll now see success criteria options at the bottom of the shape settings drawer:
Step 4 The success criteria options are optional. If you don't need to apply these then your setup is complete - close the settings drawer and continue with your process flow as required. If you do want to use these options, see below for guidance.
When data passes through a track data shape, specified data fields are tracked and by default, tracked data is marked as a success.
However, there may be times where you want to control the conditions under which the status of tracked data is deemed a success or a failure, and to record this outcome for future reference. The success criteria section allows you to:
Define filter conditions that must be met for an entity's progress to be reported as a success or failure in summary information for this tracked item.
Add a message to be displayed in summary information for this tracked item.
When tracked data is marked as failed, it is still tracked and the shape still processes successfully - for example:
In this run log, notice that tracked data is marked as a failure (1) but tracked data is stored (2) and the track data step succeeds (3).
In this context, tracked data marked as a 'failure' simply means that one or more filters defined for success were not matched and therefore this item is reported as a failure in associated tracked data summaries.
Any conditions that you want to apply can be added via filters. To define a new filter, click the add filter button:
These filters work in the same way as other filters in the dashboard - select/define a field, then set conditions and values.
You can add as many filters as you need - multiple filters work together with an 'AND' operator. Remember that you're defining conditions that must be met for a success outcome - if multiple filters are present they must ALL be matched. If one or more filters are not matched, the associated tracked data is marked as a failure.
Filters can be based on any field(s) found in your data, irrespective of whether you've chosen to track them.
The success or failure outcome from these filters is reported in the logs, and also in tracked data summaries - for example:
You can define a message to be displayed in the tracked data summary for associated tracked data:
This message can be text-only, or any combination of text, payload variables, flow variables, and metadata variables. For example:
In tracked data summaries, this example is shown as:
Messages are added to tracked data summaries when:
No success criteria filters are defined
Success criteria filters are defined and the outcome is success
Success criteria filters are defined and the outcome is failure
The tracked data page is used to view summary information for tracked data:
Here, you can search for an entity value to access available tracked data summaries, and then choose to view summary details.
Tracked data summaries are available for data tracked via the track data shape, and for data tracked via connector endpoint field tags).
Data for the tracked data page is updated every 10 minutes (i.e. every 10 minutes the database is checked for new tracked items and these are pushed to the dashboard UI).
The tracked data page shows a maximum of 15 entries for any given tracked entity. So if you're tracking the same entity in more than 15 places, only the latest 15 entries are shown.
Tracked data summaries remain available for 15 days, starting from when an entity was last tracked. For example, if you're tracking customer IDs and ID 0100001 was tracked 14 days ago but then again today, it will be visible for another 15 days (and so on until it's not seen for 15 days).
Times shown on tracked data summaries are UTC.
To access the tracked data page, select process flows | tracked data from the left-hand navigation menu:
To find tracked data information for a particular field value, start by selecting the associated entity type - for example:
The entity type list includes all entity types that have been tracked (via the track data shape or via tagged endpoints).
Next, enter the value that you want to review - for example, if you are tracking customer IDs, you'd enter the required customer ID here:
Tracked data summaries are displayed for the most recent process flow runs where this entity was tracked - for example:
As you type a value, the search updates instantly so you may notice that the list of available summaries changes as you type.
From here, you can click any summary to view details:
Each tracked data summary shows tracking information for the given entity in a flow run - for example:
You'll see one entry for each occasion that this entity was tracked in this run. In our example we have two entries because the associated process flow includes two track data shapes, and this entity passed through both.
Summary information varies slightly, depending on whether the data was tracked via a track data shape, or via tagged endpoints:
Using the try/catch shape, you can build your own path to handle process flow sync exceptions elegantly.
Place a try/catch shape before key steps in your flow, then configure its settings to determine behaviour when exceptions are found. Once this is done, the shape is added to the canvas with two routes - one for try and one for catch:
For the try route, build your flow in the usual way to achieve the required result. For the catch route, define a flow that should be followed for exceptions. For example, you might add exceptions to a cache (so they can be processed subsequently) and then notify specified contacts that exceptions have occurred.
The notify shape can be very powerful when used with the try/catch shape. Keep in mind that you can include meta, flow, and payload variables to define notification messages. For example:
When the process flow runs, data flows down the try route and ideally completes without exceptions. However, if an exception is found, the associated payload is removed and sent along the catch route.
You can add one try/catch shape per process flow
If a connector needs to retry authentication, the retry is NOT caught (i.e. it's not sent into the catch route). If re-authentication is successful the flow continues as normal (i.e. along the try route), otherwise the process flow fails.
As noted above, if an exception is found it gets removed (as a failed payload) and sent along the defined catch route. For example, if a try/catch shape receives 20 payloads and finds 4 exceptions, then 4 failed payloads are sent along the catch route.
Failed payloads can be found on the failed payloads tab in run logs - for example:
To add and configure a new try/catch shape, follow the steps below.
Step 1 In your process flow, add the try/catch shape in the usual way:
You can add one try/catch shape per process flow. It's up to you where you place this in your flow, but it's generally a good idea to add it at the very start to ensure that all steps are checked.
Step 2 Access settings for the newly placed shape:
Step 3 Choose the action to take if an exception is encountered:
Available options are summarised below:
Step 4 Save settings to return to the canvas and build your try and catch routes as required.
Trigger schedule options are used to schedule the associated process flow to run at a specified frequency and/or time. Here, you can use intuitive selection options to define your requirements or - if you are familiar with cron expressions - use advanced options to build your own expression.
Trigger schedules are based on Coordinated Universal Time (UTC).
Schedules can be defined based on the following occurrences:
Define a schedule to run every x minutes - for example:
Define a schedule to run every x hours - for example:
Define a schedule to run on selected days of the week at a given start time - for example:
Use the every dropdown list for quick daily presets, or define custom settings:
Define a schedule to run on selected days of the month or weeks, for selected months, at a given start time - for example:
Use the every dropdown list for quick monthly presets, or define custom settings:
If you are familiar with cron expressions, you can select this option to activate the cron expression field beneath and enter your required expression directly:
Patchworks supports the 'standard' cron expression format, consisting of five fields: minute, hour, day of the month, month, and day of the week.
Each field is represented by a number or an asterisk (*) to indicate any value. For example:
0 5 * * * would run at 5 am, every day of the week.
Extended cron expressions (where six characters can be used to include seconds or seven characters to include seconds and year) are not supported.
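A few illustrative five-field expressions (all times UTC):

```
*/15 * * * *    every 15 minutes
0 * * * *       at the start of every hour
30 6 * * 1-5    at 06:30, Monday to Friday
0 0 1 * *       at midnight on the first day of each month
```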
Follow the steps below to add a new trigger schedule.
Step 1 To add a new schedule, click the add new schedule button:
Step 2 Select an occurrence.
Step 3 Define your required settings for the occurrence.
Step 4 Click save to save this schedule. The schedule is added to the shape - for example:
You can add a single schedule, or multiple schedules. When you add multiple schedules, ALL of them will be active.
The set variables shape is used to set values for flow variables and/or metadata variables at any point in a flow.
When defining variable values you can use:
static text (e.g. blue-003)
payload syntax (e.g. [[payload.productColour]])
flow variable syntax (e.g. {{flow.variables.productColour}})
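For example, the following are all valid values, combining the syntaxes above (the productColour field and flow variable are hypothetical):

```
blue-003
[[payload.productColour]]
colour-[[payload.productColour]]
{{flow.variables.productColour}}
```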
Step 1 In your process flow, add the set variables shape in the usual way:
Step 2 Access shape settings:
Step 3 Click the add new variable option associated with the type of variable that you want to define - for example:
Step 4 Options are displayed for you to define the required variables and their values. How these are displayed depends on the type of variable you've chosen to add:
Step 5 Once variables are accepted they're added to the settings panel (you can edit/delete as needed):
Step 6 Save the shape.
Event connectors can trigger process flows by listening for events that are published to message queues/topics by a message broker (e.g. RabbitMQ).
Once an event connector is configured, it becomes available for use as a process flow trigger.
New event connectors are available for selection in process flow trigger shapes as soon as they are saved successfully.
ALL messages published to selected queues/topics are passed through to the process flow.
Follow the steps below to add an event connector as a process flow trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new event listener button:
...event options are displayed:
Step 3 Select a broker from the list (all configured event connectors are available for selection):
Step 4 Select a queue from the list - all configured message queues/topics for the selected broker (i.e. event connector) are available for selection:
Step 5 Save the shape settings.
The load from cache shape is used to retrieve a stored payload from an existing cache key (created from an add to cache shape).
You might configure a load from cache shape in the same process flow as the original add to cache step or - if the cache was created at company level - you might choose to load it in a different process flow.
To add a load from cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to load the payload from a cache - this could be at the very start of a process flow, or perhaps somewhere further down.
Step 2 Select the load from cache shape from the shapes palette:
Step 3 Click in the select cache field and choose which cache you want to retrieve:
Step 4 Enter the cache key that you want to retrieve - for example:
Step 5 If you want this process flow to fail if for any reason this cache can't be retrieved, tick the fail on cache miss option:
If you leave this option un-ticked, the process flow will continue to run if the cache can't be loaded.
Step 7 Save changes. The load from cache shape is added to your process flow, displaying the given name and key - for example:
Understanding how pagination options impact what data is cached.
When you drop an add to cache shape into a process flow, there are two options that you should consider if your selected endpoint paginates the data that is received OR you generate multiple payloads in some other way (for example, via the flow control shape). These options are: save all pages and append.
Together, these two options determine how multiple payloads are cached, so it's important to understand the implications of each.
If you are caching paginated data and choose to toggle the save all pages option to on, the payload for each page is saved with its page number and a unique key. For example:
The unique key is generated dynamically, by adding the page number to your specified cache key. If the cache is a flow run type, the unique key will also incorporate the flow run id.
It's important to note that every time a connection shape pulls paginated data, page numbers reset to 1.
When the append option is toggled ON, incoming payloads are appended to cache keys. How this works depends on the save all pages option:
The diagram below illustrates this:
When an add to cache shape is dropped into a process flow, the entire incoming payload is cached and associated with the given cache key. Depending on the cache type, you can load this cache later in the same flow or in a different flow.
In the simplest scenario, your given cache key would be a static value (e.g. customers) and you would use this to load the entire cache (containing perhaps tens, hundreds, even thousands of items) where required. But what if you want to load a specific item from a cache, rather than the whole thing?
This is where dynamic cache keys are so useful.
To load data from a cache, you configure a load from cache shape with the required cache and a single cache key. All data associated with your given cache key is loaded.
Consider the example incoming payload below, where four records are cached with a static cache key with a value of customers:
If we were to configure a load from cache shape to access the customers cache key, all four records would be loaded.
So, in order to load specific items from a cache, the incoming data must be added to a cache in such a way that we can easily target individual items. We need an efficient way to take incoming data, batch it into single-record payloads and add each of these to the cache with its own unique, identifying cache key - i.e.:
We can achieve this as follows:
When you specify a dynamic variable as the cache key, the value for that variable is injected into the key. To prevent the case where large amounts of data are passed into the key, there is a limit of 128 characters.
These steps assume that you have already defined a flow control shape (or some other means) to ensure that the add to cache shape receives single-record payloads.
Step 2 In the add to cache shape settings, choose to create cache:
Step 3 Set the cache level and name as required and save changes.
Step 4 Select the cache that you just created - for example:
...where schema notation should be replaced with the notation path to the first occurrence of the required element in the payload which should be used to form the cache key. If required, you can also include a static prefix or suffix. For example:
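For instance, the key below combines a static prefix with a payload variable that targets the first id in the payload (the id field is hypothetical - use whichever field uniquely identifies your records):

```
customer-[[payload.0.id]]
```

For a payload whose first id value is 1000000001, this would resolve to the cache key customer-1000000001.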
The output of the payload variable will be used as the cache key.
Step 6 Save the add to cache shape settings.
The add to cache shape is used to cache (i.e. store a copy of) the payload as it stands at that point in the process flow.
You can add as many add to cache shapes as you like in a process flow. For example, you might want to cache a payload as soon as it gets pulled from a source connection, and again later after it's been transformed - for example:
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The maximum cache size is 50MB.
Cache names must not include full stop (.) or colon (:) characters.
Cached data is stored in Amazon S3.
To add an add to cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to cache the payload - typically this would be after a 'GET' connection shape, or perhaps after data has been mapped or manipulated via a script.
Step 2 Select the add to cache shape from the shapes palette:
Step 3 Click the create cache option:
...cache options are displayed:
Step 4 Click in the cache level > select cache field to choose when/where this cache will be available:
Choose from the following options:
Step 5 Enter a name for this cache:
The cache name must not include full stop (.) or colon (:) characters.
Step 6 If you have chosen a flow-level or company-level cache, you can set a data retention period to determine when this data will expire - for example:
The data retention period for a flow run-level cache is always 2 hours - this cannot be changed. The maximum retention period for a flow-level or company-level cache is 7 days.
Step 7 Save changes to exit back to add to cache settings where you can continue with your newly created cache.
Step 8 Click in the select a cache field and select your new cache from the list:
Step 9 Enter a cache key to identify this cache object - for example:
Your cache key can be:
A cache key cannot exceed 128 characters.
Step 10 If you have multiple incoming payloads (typically where source data is paginated or has been through flow control), you should consider how these payloads are cached. The save all pages option determines cache behaviour for multiple incoming payloads:
Save all pages toggled ON. All incoming payloads are saved for your cache key. If you access the cache, you'll see each page listed with a page number - for example:
Save all pages toggled OFF. Data associated with the given cache key is overwritten each time one of the multiple payloads is saved - so only the final payload is saved - for example:
Step 11 Set the append option as required. If this option is toggled ON, incoming data is appended to the existing cache key each time an update is made. If this option is toggled OFF, the cache key is overwritten with new data each time.
Step 12 Save changes. The add to cache shape is added to your process flow, displaying the given name and key - for example:
When a process flow runs, the payload for received data flows through to subsequent steps. In a straightforward scenario we pull data from one connection, then perhaps apply filters and/or scripts before mapping/transforming data fields and finally pushing the payload into a target connection. This is a very linear example - we start with a payload and it flows all the way through to completion.
However, more complex scenarios might need to use a payload that was generated several steps previously, or even from a different process flow. This is where the add to cache and load from cache shapes come in.
Wherever you place an add to cache shape in a process flow, it will cache (i.e. store a copy of) the payload as it stands at that point in the process flow. You can then use a load from cache shape to reference this payload elsewhere in the same process flow and/or in other process flows for your organisation (depending on how the add to cache shape is configured).
For more information please see:
We've already noted how the add to cache shape can be added to a process flow to cache the entire payload at a given point in the flow. The default behaviour is that when a process flow runs and hits an add to cache shape, any existing data associated with that cache is overwritten with a new payload from the new run.
However, it is possible to append data to a cache, so each time the process flow runs and the add to cache shape is reached, the current cache is appended to the existing cache. This works for any cache type (flow, flow run, and company).
Paginated data. If your connection shape receives paginated data, it's important to understand how the save all pages option works in conjunction with append. For more information please see our .
Cache size. Theoretically, if a cache is set to append data and then runs on a regular basis indefinitely, the cache size may grow to an unmanageable size. With this in mind, a limit is in place to ensure that a single cache cannot exceed 50MB.
Append data format. Appending cached data is supported for JSON only.
Shared caches. The append to cache operation is not atomic - as such we advise against multiple process flows attempting to update the same cache at the same time.
To use the append option, follow the steps below.
Step 1 - create your cache, then select it and add your cache key.
Step 2 Ensure that the save all pages option is set as needed. For more information about how this option affects appended data please see our .
Step 3 Enable the append option:
Step 4 A path to append to field is displayed:
Here, you need to consider the structure of the payload that you're passing in and specify a path that ensures that each new payload is appended in the right place.
Step 5 Save the shape. Next time the process flow runs the data will be cached and appended.
If you choose to view the payload for an add to cache shape, the payload will always show data from the latest run - for example:
However, when you add a load from cache shape, the payload will show ALL appended data so far - for example:
In this list, you'll find any caches that have been added to this process flow (via the add to cache shape), together with any caches that have been .
Your given cache key might be static or dynamic, depending on how the cache was configured in the corresponding add to cache shape:
Cache key | Summary | Example |
---|
For detailed information about each of these approaches, please see
The cache key must be associated with an existing shape, either in the same process flow or (in the case of company-level caches) in another process flow.
Step 6 If the cache that you're loading was created with the save all pages option toggled ON, you should toggle the load all pages option ON when loading this data:
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key. For more information please see our .
Yes. As with any other process flow shape, you can view the associated payload for a load from cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the - for example:
On this page we focus on paginated data; however, the same principles apply whenever multiple payloads are cached, irrespective of whether those payloads are generated via pagination or some other means (for example, via the flow control shape).
When paginated data is pulled from a connection shape, a payload is created for each page - you can see these in the :
Save all pages | Append to cache | Cache behaviour |
---|
For information about setting the append option, please see our page.
Action | Outcome |
---|
The flow control shape is an easy way to batch incoming data into single-record payloads; however, you may prefer an alternative approach. The important point is that the add to cache shape must receive single-record payloads - how you achieve this is up to you.
Any combination of payload, flow, and metadata variables can be used to form cache key names.
Follow the steps below to configure an add to cache shape with a payload variable for generating dynamic cache keys.
Step 1 , where required.
For more information on these fields please see the page.
Step 5 Move down to the cache key field and enter the required key. Here, you use standard payload variable syntax to define your target data element:
Our example uses dynamic payload variables however, you can also use metadata variables and/or flow variables. For more information please see section.
Cached data can be loaded via our load from cache shape. Please refer to the section for more information.
How long a cached payload remains available depends on the cache level selected when you configured the add to cache shape.
When a process flow hits an add to cache shape, all data from the incoming payload is cached. With this in mind, ensure that your incoming data is , and/or as required.
The default behaviour is for the existing cache to be overwritten each time it is updated. Please see the page for information about appending data.
Cache level | Summary |
---|
Cache key | Summary | Example |
---|
If you are adding a company-level cache, you may want to make a note of the key that you specify here, so it can be shared with other users in your organisation who may want to .
It's important to understand how the save all pages option works in conjunction with the append option. If you aren't sure, please see our page before proceeding.
For more information see our .
Yes. As with any other process flow shape, you can view the associated payload for an add to cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the - for example:
If you place an add to cache shape before a shape which generates multiple payloads (typically, a ), you can see each payload that is created via the payload dropdown - for example:
Cached data can be loaded via our load from cache shape. Please refer to the section for more information.
If required, can be specified here.
Status | Display colour
---|---
Success | Green
Info | Grey
Warning | Orange
Error | Red
Channel | Outcome
---|---
Email + Log | The defined notification message is sent to any specified email notification groups AND output to run logs.
Email | The defined notification message is sent to any specified email notification groups. It is NOT output to run logs.
Log | The defined notification message is output to run logs. Email notification groups are not available for selection (so no emails are sent).
Option | Summary
---|---
Follow all matching routes | If a record matches defined conditions for multiple routes, send it for onward processing down all matched routes.
Follow first matching route only | If a record matches defined conditions for multiple routes, send it for onward processing down the first matched route, but no more.
Information | Track data shape | Tracked fields
---|---|---
Receive / send | The flow direction for the associated tracked data, as defined in the track data shape. | The flow direction for the associated tracked data, as defined for the associated connector endpoint.
Date & time | The date & time that this data was tracked. | The date & time that this data was tracked.
Success / fail | If success criteria filters are defined in the track data shape, the success/failure marker is determined by the outcome of these. If success criteria filters are NOT defined in the track data shape, the default marker is success. | Defaults to success for all tracked data.
Message | If a success criteria message is defined in the track data shape, it is shown here. | N/A
Option | Summary
---|---
Latest deployed version | Always run the latest deployed version of this process flow. So, if the called process flow is edited and re-deployed at any point, the latest deployed version will always be called.
Latest draft version | Always run the latest draft version of this process flow. So, if the called process flow is edited at any point, the latest edited (draft) version will always be called.
Specific version | Select a version from the dropdown list. With this approach, keep in mind that this version is always called - so if you update this process flow subsequently, you will NOT be running the latest draft or deployed version.
Save all pages | Append to cache | Cache behaviour
---|---|---
OFF | OFF | The given |
ON | OFF | The first time that multiple payloads are received, each one is saved to its own Next time the cache receives data, any existing As such, each |
OFF | ON | Each payload is appended to your specified |
ON | ON | The first time that multiple payloads are received, each one is saved to its own Next time the cache receives data, any existing As such, data associated with existing |
As you work with shapes in process flows, you'll be used to updating shape settings with required values. Mostly, you'll define static values - for example, selecting or entering a data item to be used as a de-dupe key, entering a key name for an add to cache shape - there are dozens of settings that you might configure when building process flows.
However, there may be times where the value of a field can't be defined as a static value because it needs to be resolved dynamically, based on data received from the incoming payload and/or defined for the process flow as a whole. This can be achieved using:
Lots of our process flow shapes include settings where you can enter a static value, or provide a variable/parameter to be resolved dynamically. Please refer to our shapes documentation for specific guidance for each shape.
Field | Summary
---|---
Source instance / Source endpoint | If data is coming into the process flow via a connector shape, use these dropdown fields to select appropriate source connector details (i.e. the same instance and endpoint as configured for the previous connector shape). If data is coming into the flow via a non-connector source (such as a manual payload, API request, or webhook) then leave these fields blank.
Entity | If data is coming into the flow via a connector shape, this field will be set as required by default. Otherwise, select the entity type associated with the data field(s) that you want to track. Note that the selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in tracked data summaries.
Direction | If data is received via a connector shape, this field will be set as required by default. Otherwise, select the flow direction (send or receive) associated with the data field(s) that you want to track: if the tracked data is being pulled from a source, set this option to receive; if the tracked data is being pushed to a destination, set this option to send. Note that the selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in tracked data summaries.
Field paths | Define one or more data fields to be tracked - i.e. fields that you may want to look up in the event of a query.
Action | Flow behaviour
---|---
Succeed as partial success | The flow completes and, where possible, data is synced. Failed payloads for exceptions are removed and are available from run logs. The flow is logged with a status of partial success.
Fail flow | The flow completes and, where possible, data is synced. Failed payloads for exceptions are removed and are available from run logs. The flow is logged with a status of failure. In this case, the flow is marked as a failure (and will show as such in run logs) but it's important to note that this does not cause the process flow to stop - any valid payloads continue through the flow. It's likely that you would only use this option instead of succeed as partial success if logging any failed payloads as a general flow failure is important from a reporting/metrics perspective.
Fail flow & retry | A retry will only happen if the process flow is enabled and deployed, and is NOT triggered manually. The flow completes and, where possible, data is synced. Failed payloads for exceptions are removed and are available from run logs. The flow is logged with a status of retried. The flow is retried with ALL data, and is retried ONCE only.
Static | Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). |
|
Dynamic |
|
cache: customerData cache key: customer-1000000001
cache: customerData cache key: customer-1000000002
cache: customerData cache key: customer-1000000003
cache: customerData cache key: customer-1000000004
The incoming payload is batched into multiple payloads - one payload per data element (e.g. one order per payload, one customer per payload, one product per payload, etc.). |
Configure the add to cache shape and specify a payload variable as the cache key, where the variable looks for the first occurrence of a uniquely identifying element in the payload (typically an id or reference number). | The add to cache shape receives and caches single-record payloads from the flow control shape. The cache key for each payload is generated dynamically by resolving the payload variable from each incoming payload. |
Flow run | The data associated with this cache is only available while the process flow is running. When a process Enabled & deployed process flows
In this case, the Draft/inactive process flows In this case, we use a TTL (Time to Live) with a default of 2 hours to determine when the cached data is deleted. There's no chance that |
Flow | Data in the cache is retained after the process flow is run, so it can be loaded again within this process flow if required.
Cache retention
When you choose to add a |
Company | The data associated with the latest update to this cache is available for use in this process flow and in any other process flows created within your company profile. |
Static | Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). |
|
Dynamic |
|
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
When we come to load this data, we must target the required cache keys. In the same way that we use a payload variable to add data to a cache with dynamic cache keys, we can use a payload variable to load data from these keys.
To do this, you configure a load from cache shape with a 'multi-pick' payload variable in the cache key, and ensure that data passed into this shape contains the values required to resolve this variable.
In summary, you can drop a single load from cache shape into a process flow and specify a payload variable as the required cache key. This must be in the form:
...where <element> should be replaced with whichever data element you will be passing in to resolve the cache key. For example:
The <element> defined here will be the same data element that was specified in the payload variable for the corresponding add to cache shape.
You then need to pass in any <element> values that should be used to resolve the required cache key names. This might be achieved via a connection shape (if values are being generated from another system), or perhaps a manual payload shape. Whichever shape you use must be placed immediately before the load from cache shape.
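As a sketch only (the exact key format depends on how the corresponding add to cache shape was configured, and the multi-pick syntax shown is an assumption based on the payload variable notation described elsewhere on this page), if records were cached with keys in the form customer-<id>, the load from cache key might be specified as:

```
customer-[[payload.*.id]]
```

...with the preceding shape (for example a manual payload shape) supplying the id values used to resolve the keys.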
To help understand how this approach works, we will step through an example.
Suppose we have a scenario where one process flow has been built to receive incoming orders, and another process flow needs to target specific orders received from this flow.
Process flow 1: Add to cache
To allow the second process flow access to orders processed by the first, we must add all incoming orders to a company type cache in the first process flow (remember that company type caches can be accessed by any other process flow created for your company profile). To ensure that we can go on to target specific orders from this cache later, we will cache every order in its own cache key, using a payload variable.
Process flow 2: Load from cache To retrieve specific orders from the cache created in the first process flow, we will pass the required order ids into a load from cache shape. These ids will be used to resolve dynamic cache keys, using a payload variable.
Here, we will batch an 'orders' payload into single order payloads - then we'll add each payload to its own cache key, which is created dynamically from a payload variable. Let's break these steps down:
Here, we will pass the required order ids into a load from cache shape. These ids are then used to resolve dynamic cache keys (via a payload variable) to determine which orders should be loaded. Let's break these steps down:
If required, you can import existing data into a de-dupe pool. For example, you may have records that you know have been processed elsewhere and want to ensure that they aren't processed via Patchworks.
Conversely, you can export de-dupe pool data to a CSV file, for use outside of Patchworks.
De-dupe data exports are completed in CSV format, delimited ONLY with a single comma between fields.
The exported file includes two columns with value and entity_type_id headers. For example:
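An illustrative export is shown below - the values and ids are hypothetical:

```csv
value,entity_type_id
ORD-1001,3
ORD-1002,3
ORD-1003,3
```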
When de-dupe data values are imported:
All records in the import file are added to the data pool as new items
Any existing items in the data pool are unchecked and unchanged
To import de-dupe values, the import file must be in the same format as export files above, with the same headers. I.e.:
Where:
The value is the key field value that you are matching on.
The entity_type_id is the internal Patchworks id for the entity type associated with the key field that you are using to match duplicates. This id must be present for every entry in your CSV file. You can download a list of ids by following the steps detailed later on this page.
Import files cannot exceed 5MB.
To export/download a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the name of the data pool that you want to export:
Alternatively, you can create a new data pool.
Step 3 With the data pool in edit mode, move to the lower tracked de-dupe data panel and click the download button:
Step 4 The download job is added to a queue and a confirmation message is displayed:
Step 5 When your download is ready, you'll receive an email which includes a link to retrieve the file from the file downloads page. If you can't/don't want to use this link, you can access this page manually - click data pools in the breadcrumb trail at the top of the page:
...followed by the settings element option:
Step 6 Select the file downloads option from the settings page:
Step 7 On the file downloads page, you'll find any exports that have been completed for your company profile in the last hour.
This list may include exports from different parts of the dashboard, not just data pools (for example, run log and cross-reference lookup data exports are added here).
Step 8 Click the download button for your job - the associated CSV file is saved to the default downloads folder for your browser.
Download files are cleared after one hour. If you don't manage to download your file within this time, don't worry - just run the export again to create a new one.
If you want to import data into a de-dupe data pool, you need to ensure that each record in your CSV file includes an entity_type_id. To find which id you should use, follow the steps below to download a current list.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the download entity types button at the top of the page:
Step 3 A CSV file is saved to the default downloads folder for your browser.
To import data into a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 If you want to import data into an existing data pool, click the name of the required data pool from the list:
Alternatively, you can create a new data pool.
Step 3 Move to the lower tracked de-dupe data panel and click the import button:
Step 4 Navigate to the CSV file that you want to import and select it:
Step 5 The file is uploaded and displayed as a button - click this button to complete the import:
Step 6 The import is completed - existing values are updated and new values are added:
You may need to refresh the page to view the updated data pool.
There may be times when you need to define variables or parameters for a process flow shape which resolve dynamically, based on given values from the incoming payload.
For example, you might have a list of customer IDs coming in from a source (for example, an inbound API job), and need to match these IDs with payload data for a subsequent connection shape, in order to create customer records. This would be achieved using a specific payload syntax when defining variables in the connection shape.
To pass in a variable or parameter value from the incoming payload, use the syntax shown below:
...where schema notation should be replaced with the relevant notation path for the required field in the incoming schema. A payload variable can be defined on its own, or combined with static text. For example:
If necessary, you can combine payload variables with metadata and/or flow variables - for example:
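Some illustrative values (the customerID field and region flow variable are hypothetical):

```
[[payload.0.customerID]]
customer-[[payload.0.customerID]]
customer-[[payload.0.customerID]]-{{flow.variables.region}}
```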
The [[payload]] variable supports non-JSON payload data types - for example, raw text, CSV, XML - whatever you pass in will be output.
To show how this works in principle, some examples are detailed below:
However, it's important to note that the required settings will depend on the data schemas used for your selected connection endpoints.
At the most basic level, your incoming data might contain items which aren't nested - for example:
In this case, our variable would be defined as:
...as shown below:
The result for our example would be:
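Pulling these pieces together as a minimal sketch (field names are hypothetical): given the un-nested payload below, a variable defined as [[payload.customerID]] would resolve to C-1001.

```json
{
  "customerID": "C-1001",
  "name": "Jane Smith"
}
```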
An incoming payload might contain the required element in an array and you want to target all items within it - for example:
In this case, we can define a variable to target the required array. For example:
...will produce a comma separated list of associated customerID values. The result for our example would be:
An incoming payload might contain the required element in an array and you only want to target a single item - for example:
In this case, we can define a variable to target the required array item. For example:
...will target the first item in the array and return the associated customerID value. The result for our example would be:
Whereas:
...will target the second item in the array and return the customerID value. The result for our example would be:
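A sketch of the array cases with hypothetical data: for the payload below, targeting all items (for example, a notation path of users.*.customerID) would produce C-1001,C-1002,C-1003, whereas targeting users.0.customerID or users.1.customerID would return C-1001 or C-1002 respectively. The exact notation depends on your schema.

```json
{
  "users": [
    { "customerID": "C-1001", "name": "Jane" },
    { "customerID": "C-1002", "name": "Sam" },
    { "customerID": "C-1003", "name": "Ada" }
  ]
}
```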
Let's take our example below:
We've seen how we can target specific items in an array, and how we can list all items in an array - but what if we wanted to target all array items individually?
In this case, we would add a flow control shape to split the incoming payload into batches of 1 at the required data element level (in the case of our example this would be users):
This will result in three payloads, each with a single customerID and name. For example:
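As a sketch (continuing the hypothetical users data above), the first of the three batched payloads might look like this - note that each batched payload is wrapped in an array:

```json
[
  { "customerID": "C-1001", "name": "Jane" }
]
```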
In the following connection step (where the variable/parameter is defined), we can add our payload syntax for the variable/parameter. For our example this would be:
...as shown below:
Here, we need to specify the * because each of our batched payloads is wrapped in an array. For each payload generated from our example, the result is that the associated customerID is taken as the variable value.
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
When we come to load this data, we must target the required cache key. If you only want a single item, the quickest way is to specify the resolved cache key.
The load from cache shape works as normal - you choose the cache and cache key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated dynamically by resolving the payload variable that was specified when the cache was added.
Consider the following process flow:
Here, our manual payload contains customer data as below:
To allow us to target specific customer records from this payload, we send it through a flow control shape, which is set to create one payload per customer:
...so now we have lots of payloads to be cached:
If we look at the payload for the first of these, we can see it contains a single customer record - notice that there's an id field with a value of 1000000001. This field uniquely identifies each record.
Next we define an add to cache shape - we create a new cache and use a payload variable to generate a dynamic cache key for each incoming payload:
Here, the payload variable is defined as:
where:
customer- is static text to prefix the resolved variable.
[[payload.]] instructs the shape that this variable should be resolved from the incoming payload.
0 denotes that the first occurrence of the following item found in the payload should be used to resolve this variable.
id is the name of the field in the payload to be used to resolve this variable.
So, if we take our first payload above:
...our payload variable would resolve to the cache key customer-1000000001.
This is what we use in our load from cache shape:
If caches have been added to your process flow or company-level caches have been added for use in any process flow, you can reference these in field mapping transformations.
Using a cache lookup transformation function, you can look up values from a cache and map them to fields in a target system.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when referencing a cache we don't select a source field - the specified cache data is our source.
Step 1 In your process flow, access settings for the map shape that you want to update:
Step 2 Click the add mapping rule option - for example:
Step 3 Click the add transform button:
Step 4 Click the add transform button:
Step 5 Click in the name field to access a list of all available transform functions, then select cache lookup:
Step 6 Cache reference fields are displayed:
Complete these fields using the table below as a guide:
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way. Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the specified cache values will be mapped to the target field.
The steps detailed above show how to configure the cache lookup transform with a known cache key. However, it's possible to populate the cache key automatically, using the output from a previous transform function.
To do this, you add a mapping row in the usual way and define any required transform functions to produce the required value for cache keys. Once this is done, add a cache lookup transform function (as shown above) but leave the key field blank.
When the key field is blank, output from the previous transform function for the mapping is applied.
Suppose you have a cache where multiple cache keys have been defined in the form:
itemref-last_name
For example:
1000021-Smith
Now suppose you want to define a cache lookup transformation which will determine the key by manipulating mapped fields. You would:
Add a mapping row with two source fields - one for itemref and another for last_name.
Select itemref as the target field.
Add a concatenate transform function to join the itemref and last_name fields with a hyphen.
Add a cache lookup transform function as defined above, but leave the key field blank.
When the process flow runs, output from the concatenate transform function will be applied as the key for the cache lookup transform function.
The example above describes how you might use a concatenate transform function as the means to generate a cache key however, the output from any transform function can be used.
If you have defined custom scripts for use in process flows, use the script shape to select a script to apply at a given point in a process flow.
You can use any version of a script which has been saved and deployed.
Creating a custom script is an advanced feature which requires some in-house development expertise.
Step 1 In your process flow, add the script shape in the usual way:
Step 2 You're prompted to select an existing script:
Step 3 Select the script that you want to use at this point in the process flow:
The list of available scripts only includes scripts which are currently deployed for use.
Step 4 The latest deployed version of the script is added to the shape - for example:
Code is displayed in view-mode. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
Step 5 Unless you have a specific reason to do otherwise, we advise using the latest version of scripts. However, if you do need to use a previous version of the script, select the 'versions' dropdown field to make your selection - for example:
Step 6 Save the shape:
To view/change the selected script for an existing script shape, click the associated 'cog' icon:
From here, the existing script is displayed - you can either select a different script, or a different version of the existing script:
Remember that the script code can't be changed here. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
A script will time out if it runs for more than 120 seconds.
The de-dupe shape is used to identify and then remove duplicate entries from an incoming payload. For more background information please see our De-dupe shape page.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
Currently, the de-dupe shape supports JSON payloads.
To add and configure a new de-dupe shape, follow the steps below.
Step 1 In your process flow, add the de-dupe shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be de-duped originates - for example:
If your incoming data is via manual payload, API request, or webhook then you can remove any default source instance and endpoint selections:
Step 3 Move down to the behaviour field and select the required option.
For more information about these options please see our De-dupe shape behaviour section.
Step 4 Move down to the data pool field and select the required data pool.
If necessary, you can create a data pool 'on the fly' using the create data pool option. For more information please see Adding a new data pool via the de-dupe shape.
Step 5 In the key field, select/enter the data field to be used for matching duplicate records. How you do this depends on how the incoming data is being received - please see the options below:
The selection that you make here determines how the payload is adjusted when duplicate data is removed. For more information please see How duplicate data is handled.
Step 6 Select the payload format:
Step 7 Save the shape.
The de-dupe shape can be used to handle duplicate records found in incoming payloads. It can be used in three behaviour modes:
Filter. Filters out duplicated data so only new data continues through the flow.
Track. Tracks new data but does not check for duplicated data.
Filter & track. Filters out duplicated data and also tracks new data.
A process flow might include a single de-dupe shape set to one of these modes (e.g. filter & track), or multiple de-dupe shapes at different points in a flow, with different behaviours.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The de-dupe shape is not atomic - as such we advise against multiple process flows attempting to update the same data pool at the same time.
The de-dupe shape works with incoming payloads from a connection shape, and also from a manual payload, API call, or webhook.
JSON and XML payloads are supported.
The de-dupe shape is configured with a behaviour, a data pool, and a key:
As noted previously, the de-dupe shape can be used in three modes, which are summarised below.
Data pools are created in general settings and are used to organise de-dupe data. Once a data pool has been created it becomes available for selection when configuring a de-dupe shape for a process flow.
When data passes through a de-dupe shape which is set for tracked behaviour, the value associated with the key field for each new record is logged in the data pool. So, the data pool will contain all unique key field values that have passed through the shape.
You can have multiple de-dupe shapes (either in the same process flow or in different process flows) sharing the same data pool. Typically, you would create one data pool for each entity type that you are processing. For example, if you are syncing orders via an 'orders' endpoint and products via a 'products' endpoint, you'd create two data pools - one for orders and another for products.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The key field is the data field that should be used to match records. This would typically be some sort of id that uniquely identifies payload records - for example, an order id if you're processing orders, a customer id if you're processing customer data, etc.
When duplicate data is identified it is removed from the payload; however, exactly what gets removed depends on the configured key field.
If your given key field is a top-level field for a simple payload, the entire record will be removed. However, if the payload structure is more complex and the key field is within an array, then duplicates will be removed from that array but the parent record will remain.
Let's look at a couple of examples.
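The payloads below are hypothetical sketches (the field names and values are ours, not from a real connector) to illustrate the two scenarios. In the first, the key field (id) is at the top level of each record, so any record identified as a duplicate is removed in its entirety:

```json
{
  "orders": [
    { "id": "1001", "status": "paid" },
    { "id": "1002", "status": "paid" }
  ]
}
```

In the second, the key field (sku) sits inside an items array, so duplicate entries are removed from that array while the parent order remains:

```json
{
  "id": "1001",
  "items": [
    { "sku": "ABC-1", "qty": 2 },
    { "sku": "XYZ-9", "qty": 1 }
  ]
}
```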
The de-dupe shape supports JSON and XML payloads.
This approach is the simplest - all incoming data is cached with a static cache key.
In the example below, all incoming customer records will be added to a cache named ALLcustomers and a static cache key named customers:
When the data is cached, it's likely that the cache will include multiple records - for example:
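The sketch below shows what the payload stored under the customers key might look like (fields and values are hypothetical):

```json
[
  { "id": "CUST-001", "email": "jane@example.com" },
  { "id": "CUST-002", "email": "sam@example.com" }
]
```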
To retrieve this cache, we simply drop a load from cache shape where required in the process flow and specify the same cache and cache key that were defined in the corresponding add to cache shape:
The load from cache shape works as normal to retrieve cached data where the cache was created with a payload variable - you choose the cache name and key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated from the payload variable that was specified when the cache was created.
If a payload variable has been used to cache data, you would typically have included a flow control shape to create multiple payloads - for example:
So you will have multiple cache keys that can be loaded. To do this, you can add one load from cache shape for every cache key that you want to retrieve, specifying the required key in each case. For example:
Alternatively, you can add a single load from cache shape and target specific cache keys by passing in the required ids.
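As a sketch of this second approach, suppose orders were cached earlier with dynamic keys in the form order-<id> (as in the worked example later on this page). A single load from cache shape targeting order-[[payload.*.id]] with an incoming payload like the one below (hypothetical ids) would resolve to the cache keys order-1001 and order-1002:

```json
[
  { "id": "1001" },
  { "id": "1002" }
]
```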
You can view and manage all existing caches from the data caches page - to access this page, select caches from the dashboard navigation menu.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The data caches page is split into three sections: flow run caches, flow caches, and company caches:
Each cache is listed with the following details:
If you have a lot of caches, you can search by name:
To access cache details for a particular cache, click on its name:
When you select a cache from the list, an edit cache page is displayed:
From here you can:
To change the name of the cache, simply update the name field in the upper cache details panel, then click the save button.
When the name is updated and saved, the change is immediately reflected in any add to cache shapes, in any process flows where this cache is used.
The cache name must not include full stop (.) or colon (:) characters.
You can use the maximum age slider to change the cache retention period for a cache:
Note that:
The maximum age for a flow run cache is 2 hours - this cannot be changed
The maximum age for a flow or company cache can be changed to anything up to 7 days
The usage panel shows general usage information about the cache:
Here you can see:
The cache contents panel displays an entry for each cache key update. Information shown varies, depending on the cache type.
The following details are displayed for each cache item in a flow run-level cache:
The following details are displayed for each cache item in a flow-level or company-level cache:
To clear all current content in the cache, click the clear cache button:
This removes any existing data but leaves the cache in place so it can still be used in process flows.
You can add notes to any shape in a process flow. This can be useful for many reasons - for example, to keep track of why certain shapes are used in more complex flows; to add reminders for any updates needed in future, or perhaps to leave guidance for another user.
You can add a single note or multiple notes to each shape - there's no limit.
Any single note cannot exceed 64KB.
Notes are not encrypted - we strongly advise against adding sensitive information such as API keys, login credentials, payload data, etc.
Notes are associated with a shape, not the process flow version, so if you add notes to shapes in a draft process flow and then deploy that process flow, the notes remain in the deployed version.
To access shape notes, click the notes icon for the required shape:
Any existing notes for this shape are displayed:
From here, you can click any existing note to open it in the notes editor, or use the add note button to add a new note.
To add a new note for a shape, click the shape notes icon for the required shape and then click the add note button:
From here you can add required content via the notes editor, then save your note.
Notes are added using a markdown editor, which includes a preview pane for rendered output:
Standard markdown formatting can be used, or you can use the notes editor toolbar to apply formatting and add elements such as code blocks, tables and flow charts.
Click here to view a markdown cheat sheet.
When adding/editing a note, you can apply a colour - perhaps useful to categorise different types of notes with a visual cue.
You can also mark a note as private (so only you can see it), or make it available for all users in your organisation. Once a note is saved, the rendered version is displayed in the notes panel.
Notes persist to subsequent versions of a process flow. For example, if you add two notes to a draft process flow, then deploy that flow, the deployed process flow will have two notes.
If you go on to add one more note to the current draft version and then deploy this draft, the deployed process flow will have three notes. The inactive version will have two notes.
For more information on process flow versions, see our Process flow versioning page.
Data pools store data entities that have been tracked via de-dupe and track data shapes.
Data pools are created and managed via the data pools option in general settings. From here you can add a new data pool, or view/update an existing data pool.
For more background information on data pools, please see the de-dupe shape and track data pages.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
De-dupe data pools can be created in two ways:
You can access existing data pools from general settings.
Step 1 Select the settings option from the bottom of the dashboard navigation bar:
Step 2 Select data pools:
...all existing data pools are displayed:
For each data pool you can see the creation date, and the date that it was last updated by a process flow run.
Step 3 To view details for a specific data pool, click the associated name in the list:
...details for the data pool are displayed:
In the top panel you can change the data pool name/description (click the update button to confirm changes) or - if the data pool is not currently in use by a process flow - you can choose to delete it.
In the lower panel you can see all data in the pool. This data is listed with the most recent entries first - the following details are shown:
The steps required to reference flow variables in a process flow can be summarised in two stages:
Any flow variables that you want to reference from process flow shapes should be added as variables within the process flow settings. To do this, follow the steps below.
Step 1 Access the process flow that you want to update and make sure that you're switched to the required version.
Typically, you would update the draft version, then deploy changes when you are ready.
Step 2 Select settings (the cog icon) from the actions bar:
Step 3 Look for the variables section in the flow settings panel - for example:
Step 4 Click the add new variable button:
Step 5 In the name field, enter the name (i.e. the API parameter name) of this variable. For our example, the variable is named customerID:
Step 6 Click in the select a type field and select the data type for this variable:
Step 7 Enter the required value to be used wherever this variable is found in the process flow - for example:
Step 8 Add all required flow variables in the same way, then save changes.
Having defined your required flow variables, they can be referenced from process flow connection shapes, wherever a variable field is present. So, if your connection shape is set to use an endpoint which requires/allows variables to be applied, you will see corresponding variable fields in the connection shape settings.
The example below shows how this works: a GET single order endpoint has been configured to expect a customerID variable, and this variable is then surfaced in connection shape settings when the endpoint is used:
When defining variable values for a connection shape, you can enter a static value, or obtain values dynamically from a payload, or reference an existing flow variable. The steps below show how to reference a flow variable.
You can also reference flow variables in custom scripts (which means you can manipulate these values however you need) and also in field mapping transformations.
Step 1 In your process flow, access settings for the connection shape that you want to update with a flow variable:
Step 2 Look for the variables section in the settings panel - for example:
To use a flow variable here, the expected variable must correlate with a variable that you added in stage 1. Notice that the example above is expecting a Customer ID variable, which correlates with the customerID flow variable that we added in step 5 of stage 1.
Step 3 Use the syntax below to reference a flow variable:
...where the variable element should be replaced with the name of the flow variable defined in process flow settings (stage 1). Using our example, this would be:
Step 4 Save changes. Now when this process flow runs, the value defined in process flow settings will be passed in for this variable.
Flow variable values can also be updated by custom scripts. For further information please see Referencing flow variables in custom scripts.
If your process flow includes a connection shape that's configured for an endpoint where a variable can be entered, you'll probably be used to entering a static value to be applied for that step - for example:
You might also be familiar with obtaining variable values dynamically, from a payload. However, flow variables provide another level of flexibility.
Flow variables provide the ability to define variables at the process flow level, and then reference these values throughout the entire process flow. You set a flow variable once, and it is applied throughout the entire process flow, wherever it is referenced.
When flow variables are modified - either manually or via a script - those updates are applied anywhere in the process flow where they are referenced, automatically.
You can also reference flow variables in custom scripts (which means you can manipulate these values however you need) and also in field mapping transformations.
Before you start working with flow variables, there are a couple of important points to understand regarding process flow versions and update persistence.
Flow variables are version-specific. For example, if you add flow variables to the current draft version and later restore an inactive version to draft, any defined flow variables won't be present. So, make sure you're updating the correct version of a process flow. For more information, please see our Process flow versioning page.
If you update a flow variable via a script, those updates persist for the duration of the flow run. Once the process flow has been completed, default values are restored.
Please see the following pages:
Payload metadata can be added by custom scripts, and also by the set variables shape.
There may be times when you need to define variables or parameters for a process flow step using values from the incoming payload metadata. This can be achieved using a specific payload syntax in the process flow connection shape.
Payload metadata cannot exceed 10240 bytes. Exceeding this limit will cause the process flow to fail on the associated step.
Payload metadata can be accessed via a variable or parameter field using the syntax below:
For example:
When a payload is passed to a 'child' process flow via the run process flow shape, meta variables are included.
If multiple fields are specified, these values are tracked as one, concatenated value. To track multiple fields separately, use one shape per field.
If data is received via a connector shape, you can navigate the associated data structure to select a field for tracking - for example:
If data is received via a non-connector source (such as a manual payload, API request, or webhook), enter a path to the required field manually - for example:
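As a hypothetical illustration of a manually-entered path: if a webhook delivers the payload below and you want to track the order id, the path you enter would point at the nested id field (shown here in dot notation - e.g. order.id - the convention used for cache lookups elsewhere in this guide; adjust to match your own payload structure):

```json
{
  "order": {
    "id": "1001",
    "email": "jane@example.com"
  }
}
```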
Use this option with care. When the process flow is retried, all data is processed again. If you have any doubt as to whether duplicate records will be handled correctly, we advise using a different action and managing exceptions separately (for example, add exceptions to a cache and process them from there).
The cache key resolves dynamically based on a payload variable. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information, please see the related documentation.
Place a flow control shape immediately before the add to cache shape and configure it to create batches of 1 at the appropriate level for your data.
It's not currently possible to access different versions of a cache. So, each time a process flow runs with the same add to cache shape, the payload for that cache is overwritten with the latest data, and it's this that will be available to load from a company cache.
Cache retention
When you choose to add a company cache, retention options are available so you can decide how long cached data should be retained (you can set a time limit in seconds, minutes, hours, or days).
The default setting is 2 hours. This can be updated to a maximum of 7 days.
The cache key resolves dynamically using variables. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information, please see the related documentation.
Cache
Use the dropdown list to select the cache that you want to reference. Available caches will be:
All flow-level caches added for this process flow
All company-level caches added from any process flow
All flow run-level caches created for this run
Key
Enter the key that was specified in the add to cache shape for the cache that you want to access here. Alternatively, if this transformation is preceded by another transformation function, you can leave this field blank and pick up a value from the output of the previous function. For further information please see the Using output from a transform as the lookup cache key section.
Lookup
You can use dot notation to look up specific elements from the cached payload. If you leave this field blank, the full cached payload is retrieved.
Default
If required, specify a default value to be used if the cache lookup transform doesn't find a value to return.
Load all pages
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). The load all pages option here can be used if you want to look up all of these pages.
Fail on miss
If this option is toggled ON, the map shape (and therefore the process flow) will fail if the cache lookup can't be fulfilled.
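To make the lookup and default behaviour concrete, assume the cached payload below (hypothetical values). A lookup of customer.email would return "jane@example.com", leaving the lookup blank would return the full payload, and a path that can't be found would fall back to the default value, if one has been set:

```json
{
  "customer": {
    "id": "CUST-001",
    "email": "jane@example.com"
  }
}
```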
Filter
Remove duplicate data from the incoming payload so only new data continues through the flow. New data is NOT tracked.
Track
Log each new key value received in the data pool.
Filter & track
Remove duplicate data from the incoming payload AND log each new key value received.
Size
The current size of the cache, shown with a percentage use indicator. The maximum cache size is 50MB.
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed. This timestamp is updated even when no data was added to the cache.
Keys
The number of keys associated with this cache.
Step 1: Manual payload
The manual payload shape contains an 'orders' payload with 17 orders in total.
Step 2: Filter
The filter shape ensures that orders are only processed if the id field is not empty.
Step 3: Flow control
The flow control shape is set to create batches of 1 from the payload root level - so every order will be added to its own payload.
Step 4: Add to cache
The add to cache shape is defined to add to a company type cache, named CPT-722. The cache key is created dynamically, where the first part is always order-, followed by the value of the first id element found in the incoming payload - e.g. order-5697116045650. All data from the incoming payload will be added to this cache key. Taking our example using flow control, the incoming payload will only ever be a single order.
Step 5: Run flow
When this process flow runs, checking payload information for the add to cache shape shows that 17 payloads have been cached - one payload for each order.
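As an illustration, the payload stored under the cache key order-5697116045650 would be that single order - something like the sketch below (the id comes from the example above; all other fields are hypothetical):

```json
{
  "id": "5697116045650",
  "currency": "GBP",
  "line_items": [
    { "sku": "ABC-1", "quantity": 1 }
  ]
}
```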
Step 1: Manual payload
The manual payload shape contains two order ids that we want to load from our cache.
Step 2: Load from cache
The load from cache shape is configured to load data from our CPT-722 cache, targeting dynamic cache keys from order-[[payload.*.id]]. Here, the required cache key(s) will be resolved from all (*) ids found in the incoming payload - in this case order-5693105439058 and order-5697116045650.
Step 3: Run flow
When this process flow runs, checking payload information for the load from cache shape shows that two payloads have been loaded - one for each of our given ids.
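For reference, the manual payload in this example only needs to carry the two ids - something like the sketch below (the exact structure just needs to match the order-[[payload.*.id]] expression used in the load from cache shape):

```json
[
  { "id": "5693105439058" },
  { "id": "5697116045650" }
]
```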
Name
The name that was specified when the cache was created. Caches are created via the add to cache shape.
Flow
For flow and flow run caches, this is the name of the process flow which is using the cache. This information is not shown for company caches, as the cache might be used in any process flow within a company profile.
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed by a process flow. The cache may or may not have been updated with data at this time (even if there is no data to be added, the access date/time is logged).
Keys
The number of cache keys associated with this cache. Cache keys are created via the add to cache shape.
Size
The current size of the cache in proportion to the limit.
Delete the cache (and all associated data).
Flow Run ID
The unique id of the process flow run that updated the cache key.
Started at
The date and time that the process flow run was started.
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via a flow control shape) and the add to cache save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via a flow control shape) and the add to cache save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
Value
The value of the field that was identified as a match for duplicate records. This is the field defined as the key to be used for de-dupe shapes - for example:
In this example, the de-dupe key is set to id, so the value field shown in the data pool will display id values.
Created by
The name of the process flow where this entry was tracked into the data pool. Click this name to open the associated process flow.
Updated at
The date and time that the record was added to the pool (UTC time).
Click the 'eye' icon to view the content associated with this key. For example: