Process flows are built by dragging and dropping shapes onto a canvas, and then configuring those shapes to work in the way you need to exchange data between connector instances.
Process flows are extremely flexible. You can build something very simple to sync data between two instances with standard field mappings - or build more complex flows, perhaps using custom scripts and/or routing data to different paths based on given conditions.
Before you start building a process flow, make sure that you've installed a connector and added your required instances for any third-party applications that you want to use.
The flexibility of process flows means that there's no 'one size fits all' approach - everyone's requirements are different, and the scope is huge. This level of flexibility is a great advantage but on the flip side - where do you start?
Here, we outline the bare bones of a process flow so you know what to consider as a minimum when getting started for the first time.
A scratchpad area will be available soon. In the meantime, we suggest registering for a sandbox account and experimenting there.
Make sure you create instances with credentials for your third-party application sandbox accounts, rather than live ones!
In their simplest form, process flows are defined to receive data from one third-party application and send it to another third-party application, perhaps with some data manipulation in between. Key elements are summarised below.
Process flows allow you to build highly complex flows with multiple routes and conditions. Here, we're considering an entry-level scenario to highlight key items as you get started with process flows.
A process flow can be associated with three version types: draft, deployed, and inactive. Before you get started building process flows, we advise reading our Process flow versioning page to make sure you understand how this works.
In this section, we share some insights on best practice for different aspects of building process flows.
This page summarises best practice insights for working with scripts in process flows. Keep in mind that scripts come in two 'flavours':
Payload scripts. A script shape is configured to run a given script on the incoming payload.
Transform scripts. A script transform function is added to a field mapping (in the map shape) - this runs a given script on the associated source field before the mapping is completed.
Generally, doing everything you can in a single script is more efficient than processing multiple scripts - it means less downtime between steps, and less chance of being caught in a queue between scripts.
There are times when deploying multiple scripts is preferable from a management perspective - for example, if you're designing generic scripts to use across multiple process flows. In most cases, you trade some speed for that modularity.
Payload scripts (i.e. scripts run via the script shape) are more efficient than transform scripts.
Effectively, a transform script pauses the map shape, calls the script, then merges that response and continues with the map. Multiple script transforms result in multiple pauses - similar to running multiple scripts back-to-back in the process flow itself.
The Patchworks script editor allows you to edit scripts and test with a given payload, but for detailed debugging, we advise using an IDE with a debugger to step through the code.
Most endpoints have a unique variable used to target a specific entity (for example, order_id, customer_id, product_id, etc.). When these are defined, you can enter required values at runtime to target a specific item - for example:
Here, our flow is built to use a connector endpoint (retrieve specific product) that includes a unique variable named ref. When we access connector settings, this variable is surfaced so users can enter a value to target when the process flow runs.
When building process flows, we recommend building an alternative version which allows users to specify such an identifier in connector settings, so single items can be processed most efficiently in the event of a sync issue - for example:
To add a shape to a process flow, click the + icon at the point you want to place it - for example:
...then choose the type of shape to add:
To access settings for an existing shape, click the associated 'cog' icon - for example:
The settings panel is displayed, so you can configure the shape as required - don't forget to save changes.
When a shape is dropped into the canvas, it's labelled with a generic name - 'map', 'flow', 'split', etc. Sometimes it can be useful to change these names to something more specific - for example, to give a hint of the shape's purpose in your flow (particularly if you have multiple shapes of the same type!).
To change the name, simply access shape settings, then click the 'edit' icon associated with the name at the very top of the settings drawer - for example:
The name field can now be edited - update the current name as needed, then click save at the bottom of the settings drawer:
To remove a shape, click the associated 'cog' icon - for example:
...then click the delete option in the settings panel - for example:
When building and testing your process flows before going live, a common requirement is to build process flows that connect to development/staging instances of your third-party systems - this ensures that you're not working with live data during the testing phase.
This page details our suggested procedure for managing this approach in terms of:
Our suggested practice for adding process flows when you want to work with different environments is detailed in the following sections:
Step 1 Install the required connectors for the third-party systems to be integrated in your process flows.
Step 2 Add instances of these connectors and specify your DEV/STAGE credentials (i.e. credentials that allow you to access data in your DEV/STAGE environment for the third-party system). It's a good idea to indicate DEV/STAGE as part of the instance name. For example:
Step 3 Move down to the labels section and apply a label that indicates that this process flow is configured for your DEV/STAGE environment - for example:
Step 6 Test carefully to ensure that the right data (i.e. DEV/STAGING data) is processed as expected.
Step 2 Update the process flow name as required. You may wish to indicate PRODUCTION as part of the process flow name but this isn't essential if you follow subsequent steps to apply labels.
Step 3 Move down to the labels section. Remove the existing DEV/STAGE label and replace it with a label that clearly indicates that this process flow is configured for your PRODUCTION environment - for example:
Step 4 Save changes.
Step 6 Check all other shapes in your process flow - if instance details are present, ensure that the PRODUCTION instance is selected.
Step 7 Check other shapes in your process flow. Typically, any instance information defined for a connection shape is inherited by subsequent 'child' shapes (which require instance details) in the flow. However, it's always worth double-checking before going live.
At this point, you now have different 'environment flavours' of the same process flow - for example:
If you need to update a PRODUCTION process flow, there are two options, summarised below.
This approach requires caution! Connections in the flow are configured for your live data, so you need to be sure that you've adjusted relevant filters during the testing phase.
With care, this option is fine for smaller changes. For more complex changes, Option 2 is the safest approach.
Remove the 'old' PRODUCTION process flow
With our process flow versioning system, you can be sure that a process flow that's currently deployed will never be edited (possibly with breaking changes) while it's in use.
To edit a deployed process flow, you take a copy as a draft and work on that - when you're ready, you can then deploy it.
Each time that you deploy, the previously deployed version is saved as an inactive version for future reference and, if required, future use.
For any process flow, there's always one draft version, one deployed version, and any number of inactive versions.
At any given time, a process flow can be associated with one of the following version types:
There is always one draft version of a process flow. The draft version can be edited freely without any possibility of changing or breaking the version that's currently deployed. With a draft version, you can add/update shapes, and change the process flow name.
Any trigger settings defined for a draft version are ignored - draft versions are never triggered to run automatically.
When you're working with a draft version of a process flow, you can take the following actions:
If you choose to run the draft version of a process flow manually, the draft version runs and any target connections will be updated. Where possible, it's always best to use sandbox connections when you're editing and testing draft process flows.
The deployed version of a process flow is the one that's currently in use (if it's enabled) or ready for use (if it's disabled).
If you run an inactive version of a process flow manually, the inactive version runs and any target connections will be updated.
You can view all versions of a process flow via the settings panel.
When you access a process flow, the version being viewed is noted in the title bar. If you are viewing a deployed or inactive version, you'll see a message advising that edits cannot be made, and the version number is displayed beneath the title.
The version number is not the same as the version id.
To switch between different versions of a process flow, access the versions list and select the required entry.
Whilst it's often useful to refer back to an inactive version of a flow for a reminder of how things used to be set up, you may wish to remove older, inactive versions.
The process flow canvas is where you build and test your process flows in a smart, visual way. This is where you define if, when, what, and how data is synced.
The process flow canvas has four main elements - a title bar, an actions bar, a shapes area, and a (hidden unless activated) settings panel:
Options in the actions bar are summarised below:
Depending on the shape, the settings panel will either open immediately so you can provide details before the shape is added to the canvas, or the shape is added to the canvas and you can access its settings when you're ready.
Settings vary for each shape - please see our shapes section for more information.
Step 3 Add instances of these connectors again, this time using your PRODUCTION credentials (i.e. credentials that allow you to access data in your PRODUCTION environment for the third-party system). It's a good idea to indicate PRODUCTION as part of the instance name. For example:
Step 1 Create a new process flow with the required name and description. You may wish to indicate DEV/STAGE as part of the process flow name but this isn't essential if you follow subsequent steps to apply labels.
Step 2 From the process flow canvas, access process flow settings.
If required labels don't exist, you can create them 'on the fly' from here, or go to settings > labels for label management. More information is available in our labels section.
Step 4 Build the process flow. When configuring connector shapes, ensure that you select the DEV/STAGING instance:
Step 5 Configure any remaining shapes as required. If you are required to select instances for other shapes in your flow, always ensure that you select the DEV/STAGING instance.
Step 1 When you're satisfied that your process flow is working correctly, duplicate it.
Step 1 Edit the duplicated process flow and access process flow settings.
If required labels don't exist, you can create them 'on the fly' from here, or go to settings > labels for label management. More information is available in our labels section.
Step 5 For every connector shape in the process flow, access settings and change the DEV/STAGING instance to the equivalent PRODUCTION instance:
Step 8 When you're ready, deploy and enable the process flow.
Keep in mind that your subscription tier determines the number of active (i.e. deployed and enabled) process flows that are allowed for each company profile.
Once your PRODUCTION version is in place and running, we advise that you disable the DEV/STAGE version but leave it in place to test any future updates.
Edit the draft version of the existing PRODUCTION process flow and deploy changes once you're satisfied that the flow is running correctly.
Edit and test the existing DEV/STAGING version of the process flow. Once testing is complete, follow the relevant steps above to duplicate the process flow and configure PRODUCTION connections. Having done this, you can either:
Retain the 'old' PRODUCTION process flow but apply an ARCHIVE label and ensure it's disabled.
Enable/disable the process flow. If you enable a process flow when viewing a draft version, there's no impact on the draft version. However, the deployed version will start to run automatically as per its settings.
Run manually. Use this option to run the draft process flow immediately.
Deploy. When you deploy a draft version, it becomes the currently deployed version and also stays as the current draft - the previously deployed version becomes a new inactive version.
The deployed version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with a deployed version of a process flow are:
Enable/disable the process flow. Just because a process flow version is deployed, it doesn't necessarily mean that it will be triggered to run automatically as per its trigger settings. For this to happen, a process flow must be both deployed AND enabled.
Run manually. Use this option to run the process flow immediately.
Copy to draft. When you do this, the process flow remains deployed and an exact copy is taken as the current draft version, ready for you to edit - the existing draft version is discarded. This is a good solution if you've been editing a draft but reached the point where you need to restart from a known sound point.
Each time a draft version of a process flow is deployed, the previously deployed version becomes an inactive version - so you have a full version history for all deployed versions of a process flow.
An inactive version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with an inactive version of a process flow are:
Enable/disable the process flow. If you enable a process flow when viewing an inactive version, there's no impact on the inactive version. However, the deployed version will start to run automatically as per its settings.
Run manually. Whilst you can use this option to run the process flow immediately, it's not recommended.
Copy to draft. When you copy an inactive version to draft, an exact copy is taken as the current draft version, ready for you to edit (the existing draft version is discarded). There's no impact on the deployed version.
Deploy. When you deploy an inactive version, it becomes the currently deployed version. The previously deployed version becomes a new inactive version, and the existing draft is not affected.
Delete. Whilst it's often useful to refer back to an inactive version of a flow for a reminder of how things used to be set up, having lots of process flows with lots of inactive versions can be detrimental to system performance. In this case, deleting older inactive versions can be useful.
Deploying the draft version of a process flow - or deploying an inactive version without editing it as a draft first - is a simple one-click operation from the versions list.
If you want to edit the currently deployed version of a process flow - or an inactive version - you must first copy it to draft. The existing draft version is replaced by the version you copy.
The process flow title bar shows the name of the process flow, as specified when it was created. The number above the title is the process flow id and the number below is the version number. To change this title, use process flow settings.
The main 'shapes area' is where you build your process flow. Start by clicking the + sign associated with the trigger shape, then choose the required shape for your next step, and build from there!
We're adding new shapes all the time! For information about working with shapes, please see our shapes section.
When you add a shape or access settings for an existing shape in your flow, available settings are displayed in a panel on the right-hand side. For example, when we choose to access settings for a trigger shape, available trigger options are displayed:
Step 1 Log in to the dashboard, then select process flows > process flows to access the manage process flows page.
Step 1 Log in to the dashboard, then open the required process flow.
Step 1 Log in to the dashboard, then open the required process flow.
Step 1 Log in to the dashboard, then open the required process flow.
Step 4 The selected version transitions to deployed (and the previously deployed version transitions to a new inactive version). If the process flow is enabled, it becomes active immediately and will run as per defined trigger shape settings.
Step 1 Log in to the dashboard, then open the required process flow.
Step 4 The selected version transitions to deployed (and the previously deployed version transitions to a new inactive version). If the process flow is enabled, it becomes active immediately and will run as per defined trigger shape settings.
Step 1 Log in to the dashboard, then open the required process flow.
Step 4 The existing draft version is replaced with the version you copied, ready to edit. The deployed or inactive version that you copied is not affected.
Step 1 Log in to the dashboard, then open the required process flow.
Step 4 The existing draft version is replaced with the version you copied, ready to edit. The deployed or inactive version that you copied is not affected.
Draft - The process flow is being built. A version becomes the draft when: a new process flow is added; a deployed version is copied to draft; an inactive version is copied to draft. Available action: Deploy.
Deployed - The process flow is currently in use, or ready for use. A version becomes deployed when: a draft version is deployed; an inactive version is deployed. Available action: Copy to draft.
Inactive - The process flow was previously deployed but superseded by a later deployment. A version becomes inactive when: a draft process flow is deployed; an inactive process flow is deployed. Available actions: Copy to draft, Deploy, Delete.
Shapes detailed in this section are available as standard, for all core subscription tiers:
Process flow settings
Use this option to access settings for this process flow - these are used to manage settings for this process flow as a whole. For more information please see Process flow settings.
Return to trigger
If you're working on a longer process flow, use this option to quickly jump back to the start (i.e. back to the trigger shape).
Use this option to run the process flow immediately. For more information please see Running a process flow manually.
Use this option to provide a payload to be passed in before the process flow initialises. This is particularly useful if you want to test how a flow will run with expected data from a Patchworks API request.
If a process flow is running, you can use this option to stop the current run. Stopping a run in this way triggers the flow to stop at its next step; however, if an API call or script has already been triggered, the process flow will stop after these have completed. With this in mind, it's important to check any target connections to clarify what (if any) updates have been made after a process flow has been stopped.
As a process flow runs, you can view progress in real time; you can also check logs and view payloads at any stage.
When adding filters for a string-type field, there may be times when standard operators can't achieve what you need. For example, when defining multiple filter conditions, the default condition is AND - so ALL specified filters must be met for a match. But what if you need an OR condition?
For more complex filtering requirements, regex can be used.
This information applies wherever filters are available, not just the filter shape.
Let's take the following payload:
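The original payload isn't reproduced here, but an illustrative equivalent (field names and values assumed) might be:

```
[
  { "id": 1, "fruit": "peaches" },
  { "id": 2, "fruit": "apples" },
  { "id": 3, "fruit": "grapes" }
]
```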
Suppose we want to retrieve any items where the value of the fruit field contains peaches OR apples. We might be tempted to add two string-type filters in the filter shape:
However, this wouldn't return any matches because we'd be looking for any records where peaches AND apples are present. Instead, we can define one string-type filter with regex:
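The exact expression from the original screenshot isn't reproduced here, but a pattern along these lines (preg_match is used for matching) would achieve the OR condition:

```
peaches|apples
```

Any record whose fruit field contains either string is treated as a match.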
Process flow settings are used to manage settings and behaviour for the process flow as a whole:
The ability to update some settings will vary, depending on the version status. For example, you can't add flow variables to a deployed version.
Process flow settings are summarised below.
1 Process flow name & description - The name displayed for this process flow throughout the system. Optionally, you can include a description.
2 Queue priority - If required, you can use this dropdown field to select the priority in which runs for this process flow are picked from your queue. Choose from:
Highest. This job is picked from the queue before anything else. If more than one job is queued with this priority, the one queued first is picked.
High. This job is picked from the queue AFTER any highest priority jobs are cleared.
Regular. This job is picked from the queue at the first opportunity AFTER any high and highest priority runs are cleared. This is the default setting.
Low. This job is picked from the queue AFTER any regular priority jobs are cleared.
Lowest. This job is picked from the queue AFTER any low priority jobs are cleared.
3 Enabled toggle button
4 Use queued time - Some process flow steps (connectors, filters, set variables, transforms, etc.) can be configured to use dynamic/relative dates, where the date is relative to the time that the variable is used in the process flow.
To prevent cases where filtered records are omitted because they were added between the time a flow was initialised and the time it left the queue to run, the use queued time process flow setting can be used. This allows you to choose whether any relative dates should be based on:
the time that the variable is reached in the flow (i.e. the current time)
the time that the process flow was queued
This option defaults to true for all new process flows.
Example
To find all records created in the last 2 hours, a relative date variable is configured as:
At 12:00 the process flow enters the queue. At this point, the value of this variable would be: 10:00
At 14:00 the process flow leaves the queue and runs. At this point, the value of this variable would be: 12:00
So:
With the use queued time option set to ON, we take the time the flow entered the queue as our base for the relative date variable, so we'd retrieve all records created after 10:00.
With the use queued time option set to OFF, we take the current time as our base for the relative date variable, so we'd retrieve all records created after 12:00.
5 Remove failed payloads
6 Production flow - If this option is toggled ON, Patchworks will receive alerts if a run fails for this flow, which may be analysed to understand trends and areas for future enhancements.
7 Labels
8 Email failure notifications
9 Deploy
10 Variables
11 Versions
The connector shape is used to define which connector instance should be used for sending or receiving data, and then which endpoint.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Any connector instances that have been added for your installed connectors are available to associate with a connector shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our Connectors & instances page.
When you add a connector shape to a process flow, the connector settings panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connector shape, click the associated 'cog' icon to access the settings panel - for example:
Follow the steps below to configure a connector shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the manage connectors page.
If you select an instance that's associated with a database connector, subsequent connector settings will vary from those detailed here - please see Configuring a database connection for more information.
Step 2 Select the endpoint that you want to use - for example
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a timeout error when processing a particularly large payload.
Step 7 Set error-handling options as required. Available options are summarised below:
Retries - Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Backoff - If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff value. This sets a delay (in seconds) before a retry is attempted. You can define a value between 1 and 10 seconds. The default setting is 1.
Allow unsuccessful statuses - If you want the process flow to continue even if the connection response is unsuccessful, toggle this option on. The default setting is off.
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Raw - Push the payload exactly as it is pulled - no modifications are made.
First - This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. So, [{customer_1}] is pushed as {customer_1}.
Generally, if your process flow is pulling from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed.
Wrapped - This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects.
The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, {customer_1},{customer_2},{customer_3} is pushed as [{customer_1},{customer_2},{customer_3}].
Step 9 If required you can set response handling options:
These options are summarised below:
Save response AS payload - Set this option to on to save the response from the completed operation as a payload for subsequent processing. Applies to: POST, PUT, PATCH, DELETE requests.
Save response IN payload - Set this option to on to save the response from the completed operation IN the payload for subsequent processing. Applies to: GET, POST, PUT, PATCH, DELETE requests.
Expect an empty response - Set this option to on if you are happy for the process flow to continue if no response is received from this request. Applies to: POST, GET requests.
Step 10 Save your changes.
The assert shape is typically used for testing purposes - you can define a static payload which is used to validate that the current payload (i.e. the payload generated up to the point that the assert shape is encountered) is as expected.
To view/update the settings for an existing assert shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure an assert shape, all you need to do is paste the required payload and save the shape - for example:
An assert shape can only be saved if a payload is present. If you add an assert shape but don't have the required payload immediately to hand, you can just enter {} and save.
To configure a database connection, add a connector shape to your process flow in the normal way and select the required source instance and source query for an existing database connector:
Having selected a source instance, you'll know if you're working with a database connector because the subsequent field requires you to choose a source query rather than a source endpoint.
Generally, database connector settings work on the same principles as 'normal' connectors but there are differences, depending on whether you're using a query that receives or sends data.
When a connector shape is configured with a receive type query (i.e. you're receiving data from a database), you'll see settings sections for variables, error handling, and response handling:
These options are summarised below.
Variables - Custom: Any query variables defined for the selected query are displayed here. Variables may be mandatory (denoted with an asterisk) so a value must be present at runtime, or optional.
Error handling - Retries: Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Response handling - Save response in payload: Set this option to on to save the response from the completed operation IN the payload for subsequent processing.
When a process flow runs using a receive type query, received data is returned in a payload. You can view this data in logs - for example:
The number of payloads received is determined by pagination options defined in the query setup, and whether these options are referenced in the associated query.
When a connector shape is configured with a send type query (i.e. you're sending data to a database), you'll see settings sections for variables, database, error handling, and response handling:
These options are summarised below.
Variables - Custom: Any query variables defined for the selected query are displayed here. Variables may be mandatory (denoted with an asterisk) so a value must be present at runtime, or optional.
Database - Override query: You can enter your own database query here, which will be run instead of the source query already selected. Before using an override query you should ensure that:
target columns exist in the database
target columns are configured (in the database) to accept null values.
Items path: If your incoming payload contains a single array of items, enter the field name here. In doing so, the process flow loops through each item in the array and performs multiple inserts as one operation.
Error handling - Retries: Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Response handling - Save response as payload: Set this option to on to save the response from the completed operation as a payload for subsequent processing.
Save response in payload: Set this option to on to save the response from the completed operation IN the payload for subsequent processing. This option saves the response from your database, together with the payload that was sent in. By default, the response is saved in a field named response; however, when the save response in payload option is toggled on, you can specify your preferred field name.
The items path field is used to run the associated query for multiple items (i.e. database rows) in a single operation, by looping through all items in a given array. Here, specify the field name in your payload associated with the array to be targeted.
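For example (field and value names assumed, purely for illustration), if the incoming payload looks like the sample below, setting items path to orders causes the query to run once for each entry in that array:

```
{
  "orders": [
    { "order_ref": "1001", "total": 25.00 },
    { "order_ref": "1002", "total": 12.50 }
  ]
}
```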
The items path field can't be used to target individual payload items. For this, you'd need an alternative approach - for example, you might choose to use override query and provide a more specific query to target the required item.
When a process flow runs using a send type query, the default behaviour is for the response from your database to be returned as the payload. You can view this in logs - for example:
If you want to see the data that was passed in, toggle the save response in payload option to on.
This is preview documentation for a new feature that will be available in an upcoming release.
The branch shape allows you to add shapes to multiple, distinct paths in a process flow, which are executed sequentially. Paths are executed one at a time, in the configured sequence, using the same incoming payload.
For complex scenarios, branching means you can split the logic in your flow, so it's easier to build, understand, and manage. For example, a common scenario for syncing orders between two systems is to:
Pull orders from the source system
Check if customer record exists in the destination system
If not, create it
Check if the address exists in the destination system
If not, create it
Decrement stock record in the destination system
Send order info to the destination system
This can all be achieved using one long process flow; however, the branch shape provides the ability to organise the steps logically. So:
Branch 1 creates/updates customer records
Branch 2 creates/updates address records
Branch 3 updates stock records
Branch 4 sends the orders to a warehouse system
The same data (i.e. any payload(s) that hit the branch shape) flows down every branch.
All steps in a branch must be completed before the next branch starts.
If one branch fails, any subsequent branches will not start.
Nested branch shapes are permitted - so you can have a branch shape within a branch.
The maximum number of branches for any branch shape is 6.
If a branch includes a try/catch shape, the catch steps are processed at the end of the flow, not at the end of the branch.
Step 1 In your process flow, add the branch shape in the usual way:
Step 2 The shape is added to your flow with two path stubs - for example:
Step 3
Click the settings
icon for the branch shape:
Step 4 Branch names are shown for the two placeholders:
...you can change these names as appropriate - for example:
...and add more branches if needed:
At this point you can rename and delete branches, but you can't re-sequence them yet. A branch can only be re-sequenced after at least one shape has been added to its path:
Step 5 Once all branches have been added, save changes:
...the branch shape is updated and you'll see a placeholder stub for each branch you added in the previous step:
Step 6 Now you can add shapes to each branch in the normal way. As soon as you add your first shape to a branch, empty shapes are added to the others - for example:
...click on these placeholders to replace them with the required shape from the shapes palette:
Nested branch shapes are allowed - so if required you can have a branch shape within a branch:
There are no limitations on the number of nested levels; however, nesting too deeply will have an impact on flow clarity.
Having added branches to a branch shape, you may wish to:
These tasks are completed from branch shape settings:
You can rename a branch at any time - simply overwrite the existing name and save shape settings:
A branch must have at least one shape added to its path before it can be re-sequenced in settings:
If you attempt to re-sequence a branch that has no associated shapes in its path, you'll be prevented from doing so and a 'no entry' indicator is shown:
When a branch is deleted, it's removed entirely - any shapes currently defined in its path are also removed. To delete a branch, click the associated delete button:
Shapes are added to branch paths in the normal way. Each path is just a flow that you build by adding shapes from the shapes palette and configuring them as needed.
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is for SFTP connections; however, it also applies to FTP.
When you install the Patchworks SFTP connector and then add an instance, you'll find that two authentication methods are available:
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via an ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this - for example:
Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, keeping current filenames:
rename:store1/completed_orders/{{current_filename}}
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
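The original code block isn't reproduced here. Purely as a hedged sketch (the data/meta structure and key names are assumptions based on the notes elsewhere on this page about adding a timestamp to the meta and returning it in the data), the PHP might look something like this:

```php
// Sketch only - adjust key names to match the structure shown in your script editor.
$meta = $data['meta'] ?? [];
$meta['original_filename'] = date('Y-m-d') . '/' . ($meta['original_filename'] ?? 'export.json');
$data['meta'] = $meta;
return $data;
```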
The path in your SFTP connection shape should be set to:
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
When a connector is built, default filters can be applied at the API level, so when a process flow pulls data, the payload received has already been refined.
However, there may be times where you want to apply additional filters to a payload that's been pulled via a connection shape - for example, if the API for a connector does not support particular filters that you need.
The filter shape works with a source payload. As such, it should be placed AFTER a connection shape in process flows.
When specifying a filter value, the maximum number of characters is 1024.
To view/update the settings for an existing filter shape, click the associated 'cog' icon:
Follow the steps below to configure a filter shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload to be filtered originates.
Step 2 Click the add new filter button:
Step 3 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add a manual path.
Step 4 Use the remaining operator, type and value options to define the required filter.
Step 5 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 6 Click the create button to confirm your settings.
Step 7 The filter is added to the filter shape - you can now add more filters if needed:
When defining a filter, you can choose from the following types:
There may be times when you need to define a filter based on incoming data matching one of many given values. Conversely, you might want to define a filter based on incoming data NOT matching one of many given values. This can be achieved using the following operators in string-type filters:
Using these operators, you can specify a comma-separated list of values that a record must have/not have in a string-type field, to be a match.
This information applies wherever filters are available, not just the filter shape.
These operators are designed to work with string-type fields only.
The contains one of many operator is used to match incoming records if the value of a given field DOES match any item from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records with a European country code in the country field (which is a string type field).
You can add a filter for the country field and select the contains one of many operator - then provide a comma-separated list of acceptable country codes as the value:
The resulting payload would only include records where the country field includes one of the specified values - i.e.:
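The original payload and screenshots aren't reproduced here, but purely as an illustration (records and codes assumed), the behaviour is:

```
Filter value (contains one of many): GB,FR,DE,IE

Incoming payload (illustrative):
[
  { "name": "A. Smith",  "country": "GB" },
  { "name": "B. Jones",  "country": "US" },
  { "name": "C. Dubois", "country": "FR" },
  { "name": "D. Chan",   "country": "AU" }
]

Resulting payload:
[
  { "name": "A. Smith",  "country": "GB" },
  { "name": "C. Dubois", "country": "FR" }
]
```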
The does not contain one of many operator is used to match incoming records if the value of a given field DOES NOT match any items from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records that do NOT have US or AU in the country field (which is a string type field).
You can add a filter for the country field and select the does not contain one of many operator - then provide a comma-separated list of unacceptable country codes as the value:
The resulting payload would only include records where the country field does NOT include one of the specified values - i.e.:
When defining your 'many values' list as the value for a contains one of many or a does not contain one of many filter, there are a couple of things to keep in mind:
It's important that your 'many values' are specified as a comma-separated list - so in our example:
...will match as required, but:
...will NOT match as required.
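The original values aren't reproduced here, but as an illustration (codes assumed), a correctly comma-separated list versus a list without commas might look like this:

```
GB,FR,DE,IE      <- matches as required
GB FR DE IE      <- will NOT match as required (treated as one single value)
```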
Any spaces included in your 'many values' list ARE considered when matching. For example, consider our original payload:
Suppose we are using the contains one of many operator to match all European countries, and the value field is defined as below:
Notice that the final IE list item is preceded by a space. This means that our filter will only match the IE country code if it's preceded by a space in the payload, so the output would be:
Our IE record (line 8 in the payload) isn't matched because there's no space before the country code in the country field.
The flow control shape can be used for cases where you're pulling lots of records from a source connection, but your target connection needs to receive data in small batches. Two common use cases for this shape are:
A target system can only accept items one at a time
A target system has a maximum number of records that can be added/updated at one time
The flow control shape takes all received items, splits them into batches of your given number, and sends these batches into the target connection.
Step 1 In your process flow, add the flow control shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates - for example:
Step 3 Move down to the batch level field and select the data element that you are putting into batches - for example:
The data structure in this dropdown field is pulled from the schema associated with the source. If your data is received from a non-connector source (e.g. manual payload, API, webhook, etc.), you can toggle ON the manual input option and enter the data path manually.
Step 4 In the batch size field, enter the number of items to be included in each batch. For example:
Step 5 By default, the payload format is auto-detected but you can set a specific format here if you prefer:
Step 6 If you're creating batches of one record, you can toggle ON the Do not wrap single records in an array option so that each output record is pushed as a single object rather than wrapped in an array.
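The original examples aren't reproduced here; as an illustration (record contents assumed), the difference is:

```
Option ON:   { "id": 1, "sku": "ABC-1" }
Option OFF:  [ { "id": 1, "sku": "ABC-1" } ]
```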
Step 7 Save the shape. Now when you run this process flow, data will be split into batches of your given size.
If you check the payload for the flow control step after it has run, you'll see that there's one payload for every batch created. For example:
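As an illustration (record contents assumed), splitting six items with a batch size of 2 produces three payloads:

```
Payload 1: [ { "id": 1 }, { "id": 2 } ]
Payload 2: [ { "id": 3 }, { "id": 4 } ]
Payload 3: [ { "id": 5 }, { "id": 6 } ]
```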
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
All process flow run jobs are added to a queue and, by default, are picked for processing when a slot becomes available - i.e. they have a regular priority.
If your process flow calls other flows, note that any 'sub flows' do NOT inherit the queue priority from the 'calling' flow. You should set the priority for these as required.
Use this option to enable or disable the process flow.
With the introduction of queueing for process flow runs, all scheduled process flows are added to a queue when they are initialised. This means that the time a flow is initialised is not the same as the time the flow actually runs - sometimes the run will be almost instant, but in busier periods there may be some minutes between starting and running a flow.
Toggle this option on if you want to remove payloads that would otherwise cause this flow to fail. This is useful if multiple payloads pass through a process flow (for example, if received data is paginated or batched via a flow control shape) and one or more of these includes a data issue.
With this option switched on, the failed payload is removed and the flow continues. If the process flow completes successfully, its run status is set to partial success.
Failed payloads can be viewed in run logs, and can also be downloaded from there:
View and update labels associated with this process flow. You can remove an existing label, apply labels from the dropdown list, or create a new label.
Use the dropdown list to select a notification group to receive an email notification if a run of this process flow fails.
Use this option to deploy the current draft version of the process flow.
Define variables and then reference these values throughout the entire process flow.
All existing versions of a process flow are displayed. From here you can select any version to view the flow at that point in time. You can also choose to deploy a version, and to copy a version to draft.
This option provides the ability to access the response body via the payload. This can be useful for cases where an API returns a successful response despite an error - by inspecting response information from the payload itself, you can determine whether or not a request is successful.
By default, the response is saved in a field named response - for example:
However, when the save response in payload option is toggled on, you can specify your preferred field name - for example:
This option provides the ability to access the response body via the payload.
By default, the response is saved in a field named response; however, when the save response in payload option is toggled on, you can specify your preferred field name.
For more information, please see the notes above.
If you remove the only shape in a branch path, that branch shifts to become the last in sequence. If you go on to add more shapes to this path, make sure that you check branch shape settings and re-sequence branches if necessary.
Further information on these authentication methods can be found on our page.
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The data object passed to the script contains three items: payload, meta, and variables.
Our script creates a timestamp, puts it into the meta, and then puts the meta into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
This opens the settings panel - for example:
The manual data path field supports .
Presentation of the value field is dependent upon your selected type. For example, if the type field is set to specific date, you can pick a date for the value:
Available filter types are detailed below. When defining a value, you can include variables.
Don't forget that when a process flow is running you can view payloads at each step - this is a great way to check that your filter is refining data as expected.
For information about these fields please see our page - details are the same.
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a store1 folder:
The path is added as a regular expression, explained below:
FTP command - A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command. For example:
rename:/orders/store1/processed/{{current_filename}}
Root - This field is only needed if you are specifying a regular expression in the subsequent path field.
If you are NOT defining the path field as a regular expression, the root field isn't important - you can leave it set to /.
If you ARE defining the path field as a regular expression, enter a root path that reflects the expected file location as closely as possible - this will optimise performance for expression matching.
For example, suppose the files that we want to process are in the following SFTP folder:
/orders/store/year/pending
and that our specified path contains a regular expression to retrieve all files for store 1 for the current day in 2023. In this case our root would be defined as:
orders/store1/2023/pending
In this way, any regular expression to match for the path will start in the relevant (2023) folder - rather than checking folders and subfolders for all stores and all years.
Path - If the name of the file that you want to target is static and known, enter the full path to it here - for example:
store1/2023/pending/20230728.json
If the name is variable and therefore unknown, you can specify a regular expression as the path. In this case, you enter the required regular expression here, and ensure that the root field contains a path to the relevant folder (see above).
Original filename - This field is not currently used. For information about working with original filenames please see the Using an {{original_filename}} variable section below.
Original path - This field is not currently used. For information about working with original paths please see the Using an {{original_path}} variable section below.
String - A text string - for example: blue
Note that when a string type is selected, you can choose the regex operator and define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match is used for pattern matching. For an example, see Using regex for string-type filters.
You can also match a string-type field according to whether or not the value is one of many items in a given list. For more information, see Using contains one of many or does not contain one of many for string filters.
String length
Number - A number - for example: 2
Specific date - A day, month and year, selected from a date picker.
Dynamic date - Specify a date/time which is relative to a +/- number of units (seconds, minutes, hours, days, months, years). For example:
It's important to be aware that relative date/time variables are affected by the use queued time process flow setting.
Boolean
Null comparison - A field is null if it has no defined value. Choose whether the given data path is null or not null. For example:
Variable - Designed specifically for cases where you are comparing a variable value as the filter comparison.
When selected, a value type field is displayed and the expected type for your variable can be selected. This ensures that a true comparison can be made. Note that if you are filtering on multiple variables, the value type field should be set to None.
This will return the correct payload - i.e. any records where peaches OR apples are present:
Typically, a process flow run is triggered and a request for data is made via a connector shape - if the request is successful, data is retrieved and the flow continues.
However, there may be scenarios where you need to control whether the connector shape or process flow run should fail or continue based on information returned from the connection request. To achieve this, you can apply a response script to your connector shape.
When a response script is applied to a connector shape, the script runs every time a connection is attempted. The script receives the response code, headers, and body from the request and - utilising response_code
actions - returns a value determining whether the connector shape/flow run continues or stops.
Response scripts are just like any other custom script, except they receive additional information from the request - see lines 11 to 14 in the example below:
When a response script is applied, the existing schema/data path defined for the associated endpoint is bypassed. If data is modified by the script, it is returned in its modified state. If the script does not modify data, the data is returned in its original format. You should consider this in any subsequent shapes where the schema is used - for example: map, filter, flow control, route.
If you use a response script on an endpoint that modifies data and you are reliant on that data to resolve variables (e.g. for pagination) you should ensure that such dependencies are not compromised by your modifications.
To implement a response script, you should:
Response scripts are written and deployed in the usual way, via the custom scripts option. However, two additional options can be used for scripts that you intend to apply via connector shapes: response_code and message.
These options are only valid when the script is applied to a connector step as a response script
.
The response_code
determines how the process flow behaves if a connection request fails. Supported response_code values are:
0
Continue
1
Fail the connector step and retry. The connector step is marked as failed and the queue will attempt it again.
2
Fail the process flow and queue it to retry. The process flow is marked as failed and queued for a retry.
3
Fail the process flow and do not retry.
4
Force the connector to re-authenticate and retry the request.
The message
is optional. If supplied, it is output in the run logs.
To apply your response script, access settings for the required connector shape and select your script from the response script dropdown field.
Here we handle the scenario where a connection response appears OK because the status
code received is 200
, but in fact the response body
includes a string (Invalid session
) which contradicts this. So, when this string is found in the response body, we want to retry the process flow.
In this case, we return a response_code
of 2
with a message
of Invalid session
:
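As an illustration only, the logic for this check might look something like the sketch below. It is written as a standalone Python function; the exact script template, input variable names and return format depend on your Patchworks scripting setup, so treat everything here as an assumption rather than the definitive implementation.

def handle_response(status_code, headers, body):
    # status_code: HTTP status code returned by the connection request (e.g. 200)
    # headers: response headers (not needed for this check)
    # body: raw response body as a string
    # The request looks successful (200), but if the body contains the
    # 'Invalid session' string we fail the process flow and queue a retry
    # by returning a response_code of 2, with a message for the run logs.
    if status_code == 200 and "Invalid session" in body:
        return {"response_code": 2, "message": "Invalid session"}
    # Otherwise, continue the flow as normal.
    return {"response_code": 0}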
Here we show how the payload
received from a connection request is checked for an order number and an order status - retrying the process flow if a particular order status is found:
User pass
The instance is authenticated by providing a username and password for the SFTP server.
Key pass
The instance is authenticated by providing a private key (RSA .pem
format) for the SFTP server.
You can use the manual payload shape to define a static payload to be used for onward processing. For example, you might define an email template that gets pushed into an email connection, or you might want to test a process flow for a connector that's currently being built by your development team.
The maximum number of characters for a single payload is 100k. Anything larger than this may cause the process flow to fail.
Any text-based data format is supported (JSON, XML, CSV, plain text) - however, keep in mind that subsequent shapes in the flow may only support JSON and XML.
To view/update the settings for an existing manual payload shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure a manual payload shape, all you need to do is paste the required payload and save the shape - for example:
A manual payload shape can only be saved if a payload is present. If you add a manual payload shape but don't have the required payload immediately to hand, you can just enter {}
and save.
Mappings are at the heart of Patchworks.
When we pull data from one system and push it into another, it’s unlikely that the two will have a like-for-like data structure. By creating mappings between source and target data fields, the Patchworks engine knows how to transform incoming data as needed to update the target system.
The illustration below helps to visualise this:
In process flows, the map shape is used to define how data fields pulled from one connector correlate with data fields in another connector, and whether any data transformations are required to achieve this.
If your organisation has in-house development expertise and complex transformation requirements, you can use our custom scripts feature to code your own scripts for use with field mappings.
The map shape includes everything that you need to map data fields between two connections in a process flow. When you start to create mappings, there are two approaches to consider:
Having added a map shape to a process flow, click the associated 'cog' icon to access settings:
For more information about working with these settings, please see our Working with field mappings page.
The generate automatic mapping option is used to auto-generate mappings between your selected source and target connections.
All Patchworks prebuilt connectors (i.e. connectors installed from the Patchworks marketplace) adopt a standardised taxonomy for tagging common fields found in data schemas for a range of entity types (customers, orders, refunds, products, fulfillments, etc.). So, if your process flow includes connections to sync data between two prebuilt connectors, it's highly likely that auto-generating mappings will complete a lot of the work for you.
Once auto-generation is complete, mapping rows are added for all fields found in the source data - for example:
Where matching tags are found, the mapping rows will include both source and target fields (you can adjust these manually and/or apply transformations, as needed).
Any fields found in the source data which could not be matched by tag are displayed in partial mapping rows, ready for you to add a target manually.
For more information about using the generate automatic mapping feature, please see our Working with field mappings page.
If your process flow includes custom connections (i.e. connectors that have been built by your organisation, using the Patchworks connector builder), you can still use the generate automatic mapping option. The success of this will depend on whether field tagging was applied to your connector during the build:
If yes, your custom connector will behave like any of our prebuilt connectors when it comes to auto-generated mappings, adding fully mapped rows for all matched tags.
If no, Patchworks won't be able to match any source fields to a target automatically - partial mapping rows are added for all source fields found, ready for you to add a target manually.
It's very easy to add individual mapping rows manually, using the add mapping rule option:
We recommend that you always try the automatic mapping option first and then manually add any extra rows if needed. However, there's no reason that you couldn't add all of your mappings manually if preferred.
For more information about adding mappings manually, please see our Working with field mappings page.
The array join transform function is used to join elements of an array as a string, with a user-defined delimiter. For example, you might have product data in an array:
...and need to convert these items to a string before pushing to a single destination field (with each item in the string delimited by a character of your choice):
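For example (the field name and values here are purely illustrative), a source value of:
"products": ["apples", "pears", "plums"]
...joined with a comma delimiter would be pushed to the target as:
"products": "apples,pears,plums"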
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select join from the array category:
Step 5 In the delimiter field, enter the character that you want to use to delimit each array field in the string:
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
If you're building multiple process flows with similar requirements for field mappings, you can export the configuration for a map shape, and then import that configuration into another map shape.
When a map shape configuration is exported, a JSON file is generated and saved (automatically) to the default download folder for your browser. All mapping rules and associated transformations are exported. You can then import this file to any other map shape within:
The same process flow
Other process flows for your company profile
Other process flows for any of your linked company profiles
To export the configuration for a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to export:
Step 2 Click the export map button:
Step 3
The configuration is exported and saved to your default downloads folder. The filename is always map.json
.
To import a mapping configuration into a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to update:
You can import a mapping configuration into a new map shape, or into an existing one. If you import a configuration into an existing map shape, any existing mappings will be overwritten.
Step 2 Click the import map button:
Step 3 Navigate to the downloaded map configuration file on your local drive, then select it.
The default filename for exported map configuration files is map.json
.
Step 4 Once a valid mapping configuration file is selected, the import completes immediately.
The cast to string transform function is used to change the data type associated with a source field from number
to string
. For example, you might have an id
field in a source system that's stored as a number value, but your destination system expects the id to be a string.
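For example (using an illustrative id field), a source value of:
"id": 12345
...would be transformed to:
"id": "12345"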
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to string from the number category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The format transform function is used to change a date value to a different format. For example:
...might be changed to:
A range of predefined date formats is available for selection, or you can set your own custom format.
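For example (illustrative values), a source date received as:
2023-07-28 14:30:00
...might be converted to:
28/07/2023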
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select format from the date category:
Step 5 Click in the format field to select a predefined date format that incoming dates should be converted to:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The following characters are commonly used to specify days in custom format dates.
The following characters are commonly used to specify months in custom format dates.
The following characters are commonly used to specify years in custom format dates.
The following characters are commonly used to specify times in custom format dates.
Unix Epoch dates must be received as a number, not a string - i.e.:
1701734400
rather than "1701734400"
When specifying a path to a given folder in this way, you don't need a /
at the start or at the end.
A number which represents the expected string length for the received payload. Here, the 'payload' might refer to a targeted field within the incoming payload, or the entire payload.
For example, if you want to ensure that an objectId
field is never empty, you would define a filter for objectId
greater than
0
:
In this case, toggling the keep matching
option to OFF means that the ongoing payload will include only items where this field is not empty. Conversely, toggle this option on if you want to pass on empty payloads for any reason.
You can use the same principle to check for empty payloads (as opposed to a specific field). In this case you would define a filter for *
greater than
0
:
A true or false value. For example, if you only want to consider items where an itemRef
field is set to true
, you would define a filter for itemRef
equals
true
:
Internally, the format transform function uses Laravel's date format methods, which in turn call PHP date format methods. Commonly used format specifiers are listed below - full details are available in the PHP date formatting documentation.
If your Unix dates are provided as strings, you should convert these to numbers. To achieve this, add a cast to number transform for the date field BEFORE the date format transform function.
For a list of commonly used date specifiers that can be used in custom formats, see the format characters listed on this page.
Cast to boolean
Convert a number value to a boolean (true/false) value based on PHP logic.
Change the source field data type from number
to string
.
Ceiling
Round up to the nearest whole number.
Apply a static number.
Floor
Round down to the nearest whole number.
Make negative
Convert number to a negative.
Make positive
Convert number to a positive.
Perform a mathematical operation for selected fields.
Change the number of decimal places.
Define override values for conversion to true/false.
Change the source field data type from boolean
to string
.
Reference a value from cached data.
Convert weight
Convert a specified weight unit to a given alternative.
Apply a true or false value.
Fallback
Set a default value to be used if the given input is empty. Blank values are supported.
Map
Convert values using a cross-reference lookup.
Convert a null value to an empty string.
Convert a null value to zero (0).
Convert a source value to null.
Apply a field-level custom script. Note that a script will time out if it runs for more than 120 seconds.
Change the source field data type from string
to float
.
Change the source field data type from string
to number
.
Join selected fields with a selected character.
Specify a comma separated list of field values to be matched for inclusion.
Convert string to boolean
Convert a string value to a boolean (true/false) value based on PHP logic.
Country code
Apply country codes of a selected type (Alpha 1, Alpha 2, Numeric). Note that this transform will cause the map step to fail if an empty value is received.
Country name
Return the country name for a country code. Note that this transform will cause the map step to fail if an empty value is received.
Apply static text or reference variables.
Specify a comma separated list of field values to be matched for exclusion.
Get the first word from a string.
Hash
Convert a string to a SHA1 Hash.
Encode data into JSON format. Note that although this is listed as a string
type transform, any data type is supported.
Get the last word from a string.
Limit
Truncates a string to a given length.
Lowercase
Convert to lowercase.
Pad an existing string of characters to a given length, using a given character.
Prefix
Add a string to the beginning of a field.
Replace any given character(s) with another character.
Split elements of a string into an array.
Substring
Return a given number of characters from a given start.
Suffix
Add a string to the end of a field.
Trim whitespace
Remove whitespace from around a string.
Uppercase
Convert to uppercase.
URL decode
Convert an encoded URL into a readable format.
URL encode
Convert a string to a URL encoded string.
d
Day of the month with leading zeros (01 to 31).
j
Day of the month without leading zeros (1 to 31).
D
A textual representation of a day in three letters (Mon to Sun)
l
A full textual representation of the day of the week (Monday to Sunday)
m
Numeric representation of a month with leading zeros (01 to 12).
n
Numeric representation of a month without leading zeros (1 to 12).
M
A textual representation of a month in three letters (Jan to Dec)
F
A full textual representation of a month (January to December).
Y
Four-digit representation of the year (e.g. 2023).
y
Two-digit representation of the year (e.g. 23).
H
Hour in 24-hour format with leading zeros (00 to 23).
i
Minutes with leading zeros (00 to 59).
s
Seconds with leading zeros (00 to 59).
a
Lowercase Ante meridiem (am) or Post meridiem (pm).
A
Uppercase Ante meridiem (AM) or Post meridiem (PM).
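As an illustration of combining these characters in a custom format (the input date here is hypothetical), a format of d/m/Y H:i applied to the date 2023-07-28 14:30:00 would output 28/07/2023 14:30, while a format of Y-m-d applied to the same date would output 2023-07-28.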
Join elements of an array as a string, based on a defined delimiter.
Apply the date that a process flow runs, with or without adjustments.
Apply a static date.
Convert a date to a predefined or custom format.
Round a date up/down to the start/end of the day.
Time now
Returns the current date and time in your required format.
Timezone
Convert dates to a selected timezone.
The cache lookup transform function is used to lookup and use data from a cache created earlier in the flow.
If caches have been added to your current process flow or company-level caches have been added for use in any other process flows, you can reference these in field mapping transformations.
For details please see Referencing a cache in mapping transformations in our cache section.
The round date transform function is used to round source dates to either the start or end of the day, where:
start of day
changes the time to 00:00
for the received date
end of day
changes the time to 23:59
for the received date
So, you can round a given source date before sending the rounded value into a given target field.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round date:
Step 5 Choose your required rounding:
Step 6 Accept your changes and save the transformation - at this point your mapping row is displayed without a target. From here, you can go ahead and add a target field:
The custom number transform function is used to map a given number to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom number transformation is used we don't select a source field - the custom number transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom number:
Step 5 Move down to the custom number field and enter your required number - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom number will be mapped to the given target field.
The custom dynamic date transform function is used to set a target field to the current date and time, based on the date and time that the process flow runs. You can also define rounding and adjustments.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom dynamic date:
Step 5 Optionally, you can add adjustment settings - for example:
These options are summarised below:
Round
Select start of day
to change the time to 00:00:00
for the date the process flow is run.
Select end of day
to change the time to 23:59:59
for the date the process flow is run.
Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00
If set to start of day
, the transformed value would be 2023-08-10 00:00:00
.
If set to end of day
, the transformed value would be 2023-08-10 23:59:59
.
Units
If you want to adjust the date/time, select the required unit - choose from second
, minute
, hour
, day
, week
, month
or year
.
Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00, and you want to adjust it by 1 day.
In this case, you would select day
as the unit
and specify 1
as the adjustment
value. However, if you wanted to adjust by 1.5 days, you would set the unit
to hour
and specify 36 as the adjustment
value.
Adjustment
Having selected an adjustment unit, enter the required number of that unit here.
See units
examples above.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom dynamic date will be mapped to the given target field.
The round number transform function is used to change the number of decimal places for a number value. For example:
...might be changed to:
With the round number transform you can specify the number of decimal places that should be applied to incoming numeric values.
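For example (illustrative field name and values), a source value of:
"price": 10.66666
...rounded to 2 decimal places would be pushed to the target as:
"price": 10.67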
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round number from the number category:
Step 5 Move to the decimal places field and enter the number of decimal places required for transformed values - for example:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
When you add a process flow, you're given a new, blank canvas to start building your data flow using automated shapes.
A process flow can be as simple or as complex as you need, and you can add as many as you like - maybe one flow will fulfil all of your requirements, or maybe you'll need several to achieve different things.
To add a new process flow, follow the steps below.
Step 1 Log in to the Patchworks dashboard, then select process flows > process flows to access the manage process flows page.
Step 2 Click the create new flow button:
Step 3 Enter a name for this flow, then click the next step button.
It's advisable to use a name that reflects the aim of this process flow. Names must be three characters or more.
Step 4 Version 1 of the new process flow is created and displayed on the process flow canvas - for example:
This is a draft version which you can use to build your data flow using automated shapes. Once you're satisfied that your process flow is working as required, you can deploy it for use. For more information about this process please see our Process flow versioning page.
All process flows must begin with a trigger - what event is going to trigger this flow to run? For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings (for example, to set a schedule).
Patchworks process flows are incredibly flexible. With a range of shapes for receiving, paginating, manipulating, batching, splitting, caching and sending data, you can build highly complex flows in a matter of minutes.
With this in mind, it's important to understand how data flows through shapes.
In the simplest of scenarios, a process flow receives a single payload of unpaginated data and this flows all the way through to completion with no manipulation or batching - one payload is received, processed, and completed.
However, if your incoming data is paginated and/or you introduce shapes capable of generating multiple payloads, it's important to understand how these pass through the flow. Essentially, any payloads that a shape outputs are added to a 'bucket' and it's that bucket that is then passed to the next shape.
So, all payloads from one shape are passed to the next shape in the same context - they don't pass down the entire flow individually.
If a 'pull' connection shape is configured to use an endpoint that paginates data, the connection shape outputs each page in its own payload.
The animation below shows how this works.
All shapes (except the connection shape) have a set timeout of 30 minutes. If processing is not completed within this time, the shape fails.
The timeout for a connection shape is configurable via connection shape settings.
With some exceptions, a further three attempts will be made if a process flow shape fails.
The null to string transform function is used to convert incoming null
values to an empty string. For example:
...is converted to:
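For example (illustrative field name), a source value of:
"notes": null
...would be pushed to the target as:
"notes": ""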
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast boolean to string transform function is used to change the data type associated with a source field from boolean
to string
.
A boolean
data type can have only two possible states: true
or false
.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast boolean to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom boolean transform function is used to map a value of true
or false
to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom boolean transformation is used we don't select a source field - the custom boolean transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom boolean:
Step 5 Move down to the value field and select your required true/false value - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the selected custom boolean value will be mapped to the given target field.
The null value transform function is used to replace the value of a source field with null.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null value from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The script transform function is used to apply an existing custom script to the source value, and the updated field value is pushed to the target field.
Make sure that the required script has been created before applying it as a transform function.
Any payloads passed in and out of a script transform are verbatim - there is no JSON encoding.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select script:
Step 5 Click in the script version field and select the script/version that you want to apply for this field transformation - for example:
All scripts and versions are available for selection. If you choose a script/version that is not currently deployed, it will be deployed automatically.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select/update the target field and then the mapping in the usual way.
The math transform function is used to perform a mathematical operation for selected fields. For example, your incoming payload might include customer records, each with a series of numeric value
fields that need to be added together so the total can be pushed to a total
field in the target system.
The following mathematical operations are available:
Add
Subtract
Multiply
Divide
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes customer records, each with three value fields (value1
, value2
, value3
) that must be added together and pushed to a total
field in the target system.
The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be added together and then pushed to the target.
Step 1 In your process flow, access settings for your map shape:
Step 2
Find (or add) the mapping row which requires a math transformation. In the example below, we have a row that's currently set to map the source first name
field into the destination full name
field:
Step 3 On the source side of the mapping row, we need to include all the fields to be used in our mathematical operation. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to use - for example:
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to include in the mathematical operation.
With all required source fields defined for our mapping row, we can add a math transform function to define the required calculation based on these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select math from the number section in the list of transform functions:
...math options are displayed:
Step 4 Click in the operator field and select the type of calculation to be performed - you can choose from add, subtract, multiply and divide:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be used in the calculation:
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be used - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to include any more source fields to be used in the calculation. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are processed in the sequence that they are added here.
Step 11 Having added all required source fields to be calculated, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the mathematical operation will be performed for the given source fields and the total value is pushed to the defined target field. The example below shows an incoming payload before and after the math transformation is applied:
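As a purely illustrative before/after (field names are assumed for this scenario), an incoming record such as:
{"first_name": "Ana", "value1": 10, "value2": 5, "value3": 2.5}
...with an add operation across value1, value2 and value3 would result in the following value being pushed to the target field:
"total": 17.5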
The cast to float transform function is used to change the data type associated with a source field from string
to float
.
A float is a type of number which uses a floating point to represent a decimal or fractional value. Floats are typically used for very large or very small values with many digits after the decimal point - for example: 5.3333 or 0.0001.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to float from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The pad transform function is used to pad an existing string of characters to a given length, using a given character. You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
The payload item below contains a string that's 8 characters long:
If we apply padding to a length of 20
using a *
character to the right
, the result would be:
Here, we have an extra 12 * characters to the right, giving a string length of 20. However, if we apply the *
character to both
, the result would be:
Now the padding is applied with 6 characters to the left of the original string and 6 characters to the right.
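Summarising the example above with plain values (the string itself is hypothetical), padding the 8-character string PRODUCT1 to a length of 20 with the * character gives:
right: PRODUCT1************
both: ******PRODUCT1******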
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select pad from the string section:
Step 5
Click in the direction
field and select where you would like padding to be applied:
You can apply padding to the left
(i.e. immediately before the existing string), to the right
(i.e. immediately after the existing string), or both
(immediately before and after, as equally as possible).
Step 6
In the length
field, specify the number of characters that you'd like the final (i.e. transformed) string to be - for example:
Step 7
In the pad character
field, specify the character that you'd like to use for padding - for example:
If you want padding to be applied with spaces, press the space bar once in this field.
Step 8 Click the add field button:
Step 9 Click in source fields and select the source field to be used for this transform:
Step 10 Accept your changes (twice).
Step 11 Save the transform.
Field transformations can be defined to change the value of a data field pulled from a source system before it is sent to its target. A transformation comprises one or more transform functions.
This page explains how to add a transformation for a field mapping, and introduces the available transform functions.
For a summary of available transform functions, please see the transform functions section.
For information about adding a field transformation using a cross-reference lookup table, please see our cross-reference lookups section.
To add a new transformation for a field mapping, you start by adding a new transformation and then build the required functions. To do this, follow the steps below:
Step 1 Access the required process flow and then edit the map shape to be updated with a transform:
Step 2 Find the mapping that you want to update, then click the transform icon (between source and target elements). For our example, we're going to add a prefix to the 'id' field:
Step 3 Click the add transform button:
Step 4 Use the select a function field to choose the type of function that you need to use (functions are organised by type):
Step 5 Depending on the type of function you select, additional fields are displayed for you to complete. Update these as required - for our example, we're adding text to be used as a prefix:
Step 6 Now we need to confirm which source field this transform field should be applied for - click the add field button:
Step 7 Select the required field:
In straightforward scenarios, this will typically be the same source field as defined for the mapping row. However, more complex scenarios may prompt multiple options here - for example, if you apply multiple transforms to the same mapping.
Step 8 Accept your changes.
Step 9 Add more fields if necessary.
Step 10 When you're satisfied that all required fields have been added, accept changes and then save the shape settings.
The custom static date transform function is used to set a target field to a given date and time.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom static date:
Step 5 Click anywhere in the date field, or click the calendar icon, to open a date picker:
Step 6 Set the required date and time.
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom static date will be mapped to the given target field.
The custom string transform function is used to map a given string to a target field. This string can be static, or you can reference flow variables and cached data.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom string transformation is used we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom string:
Step 5 Move down to the custom string field and enter your required text or variables - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom string (or associated values from variables) will be mapped to the given target field.
The JSON encode transform function is used to encode incoming values as JSON. For example, you might have product data in a string:
...and need to encode the values for pushing to the destination system:
Although this is listed as a string type transform, in fact any data type can be encoded.
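For example (illustrative values), a source value of:
["apples", "pears", "plums"]
...would be encoded and pushed to the target as the JSON string:
"[\"apples\",\"pears\",\"plums\"]"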
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select JSON encode from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes:
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The replace transform function is used to replace an existing source string value with either:
An alternative string value
An empty value
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select replace from the string category:
Step 5 Update search and replace fields with your required values:
For the replace field, you can enter another string or leave the field blank to replace the source with an empty value.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The split string transform function is used to split elements of a string into an array, with a user-defined delimiter. For example, you might have product data in a string:
...and need to convert these items to an array before pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine where each split occurs. The transform checks incoming string values and determines each array item to be the word after each comma.
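For example (illustrative field name and values), a source value of:
"products": "apples,pears,plums"
...split with a comma delimiter would be pushed to the target as:
"products": ["apples", "pears", "plums"]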
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select split from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped:
.
+
*
?
^
$
(
)
[
]
{
}
|
\
/
For example, a delimiter of *
would be entered as:
\*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to number transform function is used to change the data type associated with a source field from string
to number
. For example, you might have an id
field in a source system that's stored as a string value, but your destination system expects the id to be a number.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to number from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
This page provides guidance on using the map shape to configure field mappings between two connector instances.
Step 1 Click the source endpoint option:
...source and target selection fields are displayed:
Step 2 Use source and target selection fields to choose the required connector instance and associated endpoints to be mapped - for example:
Step 3 Click the generate automatic mapping button:
...when prompted to confirm this operation, click generate mapping:
As we're configuring a new map shape, there's no danger that we would overwrite existing mappings. However, always use this option with caution if you're working with an existing map shape - any existing mapping rules are overwritten when you choose to generate automatic mappings.
If you need to access the generate automatic mapping option for an existing map shape, you need to click into the source and target details first.
Step 4 Patchworks attempts to apply mappings between your given source and target automatically. A mapping rule is added for each source data field and, where possible, a matched target field - for example:
From here you can refine mappings as needed. You can:
Step 5
Toggle wrap input payload
and wrap output payload
options ON/OFF as required.
Where:
wrap input payload
ON. Wraps the incoming payload in an array [ ]
ONLY for processing within the map shape.
wrap output payload
ON. Wraps the outgoing payload in an array [ ]
ONLY for onward processing.
Click options below for payload examples showing how these options work in practice:
Step 6 Save changes.
You can add as many new mapping rules as required to map data between source and target connections.
There may be times where you don't want to (or can't) use the payload fields dropdown to select a field from your source/target data schema. In this case, you simply select the manual input field and enter the full schema path for the required field.
You can change the display name and/or the field associated with the source or target for any mapping rule.
If required, you can map a source field to multiple target fields - for example, you might need to send a customer order number into two (or more) target fields.
Sometimes it can be useful to map multiple source fields to a single target field. For example, you might have a target connection which expects a single field for 'full name', but a source connection with one field for 'first name' and another field for 'surname'.
When you choose to delete a mapping rule, it's removed from the list immediately. However, the deletion is not permanent until you choose to save the mapping shape.
In our example, our source data is coming in via a manual payload, so we're defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 9 Go to .
All source fields that were added for this mapping will be available for selection here.
For more information about referencing flow variables in a custom string, please see our flow variables page. For more information about referencing cached data in a custom string, please see our cache section.
Any instances defined for your company profile are available to select as the source or target. If you aren't using a connector to retrieve data (for example, you are sending in data via the Inbound API or a webhook), you won't select a source endpoint - instead, use the override source format dropdown field to select the format of your incoming data:
If you've used the option to generate an initial set of mappings, you may find that some source fields could not be auto-mapped. In these cases, a mapping rule is added for each un-mapped source field, so you can either add the required target manually or remove the rule.
In this case, you would define mappings for the required source and target fields, then apply a transform function to join the two source fields.
You need to map an array within a payload but also include one of the 'parent' fields - for example, map the following:
...to:
To achieve this, the target field must be defined with double *.
characters in the data path. For example:
If you don't do this, and just enter the standard single *.
characters - for example:
...the output will be a flat object - for example:
The run process flow shape is used to call one process flow from another, so you can run process flows in a chain. For example, you might have a process flow that receives data from a webhook, applies filters and then hits a run process flow shape to call another flow with that data.
The default behaviour is for the payload from the end of the calling process flow to be sent into the called process flow for onward processing. However, when configuring a run process flow shape you can add a manual payload - in this case, your manual payload will be sent into the called process flow.
The run process flow shape also allows you to choose whether any variables associated with the called process flow should be applied.
A called process flow will only run if it is enabled.
A called process flow is always added to your run queue for processing - even if the parent flow is triggered manually.
A called process flow does NOT inherit the queue priority of its parent - you should set the priority of these process flows individually.
If you don't configure a manual payload in the run process flow shape, the final payload from the calling process flow is always sent into the called process flow.
If multiple payloads are passed into the run process flow shape, the called process flow will run once for each payload - these runs take place in parallel.
When a payload is passed to a 'child' process flow, meta variables are included.
You cannot create a recursive process flow loop - for example, if Process Flow A calls Process Flow B, you cannot then call Process Flow A from Process Flow B.
Step 1 In your process flow, add the run process flow shape in the usual way:
Step 2 Click in the flow field and select the process flow that you want to run:
If you have a lot of process flows, you can search for the one you want to use here.
If you select a process flow that is not enabled, an error will be displayed when you attempt to save these settings. In this case, you should access the process flow you want to call and enable it, then come back to save this shape.
Step 3 Move down to the settings section and choose which version of the selected process flow to call:
Available options are summarised below:
Latest deployed version
Latest draft version
Specific version
Select a version from the dropdown list. With this approach, keep in mind that this version is always called - so if you update this process flow subsequently, you will NOT be running the latest draft
or deployed
version.
Step 4 If your selected process flow is associated with any process variables, these are shown - you can choose to enable or disable these:
Step 5 If you want to pass a manual payload into this process flow, toggle the specify payload manually option ON and paste the required payload into the supplied payload field:
The manual payload can be any format - JSON, XML, plain text, etc.
Step 6 Save the shape. The configured shape is added to the canvas with the sub-flow available as a link - for example:
Click this link to open the sub-flow in a new browser tab.
The cast to boolean transform function is used to convert given values to true
/false
based on PHP logic, but with the option to define overrides for specific input values. For example, consider the following payload:
Suppose we want to define a mapping for the fruit_included
item (line 2) but convert the numeric value to a true
/false
(boolean) value. Without an override setting, transforming the number to a boolean value would result in the following payload:
This is because standard PHP logic determines that 0
equates to a boolean false
value. But in this specific case, a quirk in our source data is such that the 0
value actually means true
- hence a list of fruits follows.
So, we need a way to specify an override for this field, where 0
equates to true
. We can do this via the other > cast to boolean transform function, thereby achieving the desired result:
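To summarise this example with illustrative values, the incoming item:
"fruit_included": 0
...with 0 entered as a true values (override) would be pushed to the target as:
"fruit_included": true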
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to boolean from the other category:
Step 5
In the true values (override)
and/or false values (override)
fields, enter specific values that should override standard PHP logic:
Multiple override values can be entered - use a comma to separate each one.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The set variables shape is used to set values for flow variables and/or metadata variables at any point in a flow.
When defining variable values you can use:
static text (e.g. blue-003
)
payload syntax (e.g. [[payload.productColour]]
)
flow variable syntax (e.g. {{flow.variables.productColour}}
)
Step 1 In your process flow, add the set variables shape in the usual way:
Step 2 Access shape settings:
Step 3 Click the add new variable option associated with the type of variable that you want to define - for example:
Step 4 Options are displayed for you to define the required variables and their values. How these are displayed depends on the type of variable you've chosen to add:
Step 5 Once variables are accepted they're added to the settings panel (you can edit/delete as needed):
Step 6 Save the shape.
The notify shape is used to create custom notification messages for output to run logs and email messages.
To achieve this, you compose a notification template
within the shape settings using any combination of static text and variables. When the process flow runs and hits this shape, the notification message is generated from your defined template and is then:
Output to the run logs AND/OR
Emailed to recipients in selected notification groups
Notification templates can include dynamic content from variables.
Email notifications are sent irrespective of whether a process flow is enabled and deployed.
The maximum number of email notifications that can be sent (across all process flows for your company profile) is determined by your Core subscription tier. If you manage multiple linked profiles, each one of these will have its own allowance.
Notification templates can include dynamic content from payload, flow, and meta variables.
Variables return the first 100 characters of the associated content.
In the example below, we use two flow variables to retrieve a store name (store_name
) and a team name (query_team
), and a payload variable (our_id
) to retrieve required information from the payload:
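A template for this example might look like the line below - the wording is illustrative, but the variable names and syntax follow the flow variable and payload conventions described above:
Store {{flow.variables.store_name}}: order [[payload.our_id]] requires review by the {{flow.variables.query_team}} team.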
Step 1 In your process flow, add the notify shape in the usual way:
Step 2 Select an alert level from the dropdown field:
The selection made here determines how this notification is displayed in logs and email messages:
Success
Green
Info
Grey
Warning
Orange
Error
Red
Email notifications always include the alert level
as a status
, which can be useful if you want to define mailbox filters based on the alert level. An example message is shown below:
Step 3 Choose notification channel(s) to be used - i.e. how these notifications should be communicated:
Available options are summarised below:
Email + Log - The defined notification message is sent to any specified email notification groups AND output to run logs.
Email - The defined notification message is sent to any specified email notification groups. It is NOT output to run logs.
Log - The defined notification message is output to run logs. Email notification groups are not available for selection (so no emails are sent).
Step 4
This field is not displayed if the channel is set to log in the previous step.
The email limit determines the maximum number of emails that the notify shape can send per flow run:
For example, if you select a notification group that contains 20 recipients and set the email limit to 10, the first 10 recipients will receive emails and the remaining 10 in the group will NOT receive emails.
Step 5
This field is not displayed if the channel is set to log in the previous step.
If you want to send this notification to email recipients, select the required notification group:
All defined notification groups are available for selection. If you need to add a new group or check recipients in an existing group, please check our Notification groups page.
Remember that the email limit defined in the previous step may limit the number of recipients to receive emails. It's a good idea to check how many recipients are in your selected notification group, and that your email limit is set appropriately.
Step 6 Add your required notification text/variables to the notification template section:
Remember that a notification message can include static text and (using variables) dynamic content.
If you need more space, you can drag this field further down using the handlebar in the bottom right corner:
Step 7 Save the shape.
The split shape is used to split out a given payload element. When data is split, the specified element (including any nested elements) is extracted for onward processing.
For example, your process flow might receive customer data from a source connection, but you need to send address details to a different endpoint. In this case, you'd use the route shape to create two different routes, mapping just customer data down one, and splitting out addresses for the other.
Step 1 In your process flow, add the split shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates:
Step 3 Move down to the level to split section and use the dropdown data path to select the required data element to split - for example:
Remember - any data (including nested data) within the selected element will be split out into a new payload.
Step 4 If required, you can add a wrapper key. This wraps the entire payload in an element of the given name - for example:
...would wrap the payload as shown below:
Step 5 Save the shape.
The route shape is used for cases where a process flow needs to split into multiple paths, based on a given set of conditions. Conditions are defined based on any fields found in the schema associated with your source data, so the scope for using routes is huge.
To define multiple routes for your process flow you must:
Add a route shape.
Configure the route shape to add required routes and conditions.
Build the flow for each configured route by adding shapes in the usual way.
By default, multiple routes are processed in parallel when a process flow runs.
If a route shape is added without route conditions, incoming data flows down ALL defined routes.
When you add a route shape to a process flow, the shape is added to your canvas with two placeholder route stems - for example:
To configure these routes (and add more if needed) click the 'cog' icon associated with this shape to access route settings.
Follow the steps below to configure route data for a route shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload for the route shape is coming from - for example:
Step 2 Select a routing method to determine what should happen if a payload record matches conditions defined for more than one route:
These options are summarised below:
Follow all matching routes
If a record matches defined conditions for multiple routes, send it for onward processing down all matched routes.
Follow first matching route only
If a record matches defined conditions for multiple routes, send it for onward processing down the first matched route, but no more.
Step 3 Click the 'edit' icon associated with the first route:
Step 4 Enter your required name for this route - it's a good idea to ensure this provides an indication of the route's purpose. For example:
Step 5
If you intend to define multiple rules/filters for this route, select the required operator from the filter logic dropdown field:
Select AND so all defined filters must apply for a match.
Select OR so any one of the defined filters will result in a match.
If you only need to define one filter, just leave the default setting in place (it has no effect).
Step 6 Click the add new filter button:
Step 7 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add syntax for dynamic variables:
The manual data path field supports metadata variables.
Step 8 Use remaining operator, type and value options to define the required filter.
When a string filter type is selected, you have the option to select the regex operator and then define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match is used for pattern matching.
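For example (the field and pattern are illustrative), to match only records whose reference value starts with UK-, you could select the regex operator and enter a pattern such as:

/^UK-/

Any value that satisfies the pattern (as evaluated by preg_match) is treated as a match for the filter.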
The following data types are available when defining filters:
String - A string value.
Number - A numeric value.
Specific date - A selected date - choose the required date/time from a date picker.
Dynamic date - A +/- number of seconds, minutes, hours, days, months, or years to be matched relative to the current date.
Boolean - A true/false value.
Step 9 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 10 Click the save button (at the bottom of the panel) to confirm these settings. The new rule is added for your first route - for example:
Step 11 Repeat steps 6 to 10 to add any additional rules for this route. When you've added all required rules, click the save/update button at the bottom of the panel.
Step 12 Repeat steps 3 to 11 to configure the second route.
Step 13 Add any additional routes required using the add route button. Each time you add a new route, the canvas updates with an additional route stem from your route shape.
Step 14 Save your changes.
Having defined all required routes and associated conditions, the route shape on the canvas will have the corresponding number of route stems, ready for you to add shapes. For example:
Click the + sign for each branch in turn, then add the required shapes for each route flow.
The null to string transform function is used to convert incoming null values to an empty string. For example:
...is converted to:
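The before/after payloads aren't reproduced here, but the behaviour can be sketched with a simple example (the field name is illustrative):

{ "middleName": null }

...is converted to:

{ "middleName": "" }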
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The tracked data page is used to view summary information for tracked data:
Data for the tracked data page is updated every 10 minutes (i.e. every 10 minutes the database is checked for new tracked items and these are pushed to the dashboard UI).
The tracked data page shows a maximum of 15 entries for any given tracked entity. So if you're tracking the same entity in more than 15 places, only the latest 15 entries are shown.
Tracked data summaries remain available for 15 days, starting from when an entity was last tracked. For example, if you're tracking customer IDs and ID 0100001 was tracked 14 days ago but then again today, it will be visible for another 15 days (and so on until it's not seen for 15 days).
Times shown on tracked data summaries are UTC.
To access the tracked data page, select process flows | tracked data from the left-hand navigation menu:
To find tracked data information for a particular field value, start by selecting the associated entity type - for example:
The entity type list includes all entity types that have been tracked.
Next, enter the value that you want to review - for example, if you are tracking customer IDs, you'd enter the required customer ID here:
As you type a value, the search updates instantly so you may notice that the list of available summaries changes as you type.
From here, you can click any summary to view details:
Each tracked data summary shows tracking information for the given entity in a flow run - for example:
Callback triggers are used in conjunction with the , so you can send API requests to initialise a process flow and return data in a real-time, synchronous call.
When you add a callback to a process flow, a unique Patchworks URL is generated. This URL should be used in your API request(s), so data can be returned from the . For more information please see our page.
Callbacks won't be initialised until the associated process flow is enabled and deployed.
Patchworks callback URLs are generated in the form:
For example:
The {{callback_id}} element is a Patchworks signature, generated as a random hash that doesn't expire. This provides built-in authentication for our URLs; however, they should still be kept private.
Follow the steps below to add a new callback trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new callback button:
...a unique Patchworks callback URL is generated:
Step 3 Copy this URL for use in your API requests:
Step 4 By default, the payload returned for a callback is expected in JSON format, so the content-type for callback responses will be set to JSON. If you require a different format, you can edit settings for the callback URL - for example:
...now you can update the expected format:
Available formats are:
JSON
XML
TXT
Payloads are not validated against this setting - it simply determines the content-type header value in responses.
Step 5 Save shape settings.
Step 6 Build the rest of your process flow as needed, including a callback shape at the point data must be returned to your API.
All process flows must begin with a trigger - this determines when and how frequently the process flow will run. For this reason, all process flows are created with a trigger shape already in place. You should edit this shape to apply your required trigger type:
Having accessed settings for a trigger shape, you can select the required trigger type - this determines any subsequent options that are displayed:
The following trigger types are available:
The track data shape is used to track processed data, based on field paths that you define. When data passes through a track data shape, the values associated with your defined field paths are tracked - which means they can be viewed later via the tracked data page.
For example, you might want to track all customer_id values that pass through a flow, so at any time you can quickly check if/when/how a given customer record has been processed.
By default, tracked data is available for 15 days after it was last tracked.
Depending on data volumes, allow 10 minutes for tracked data to be visible in data pools.
The track data shape works with incoming payloads from a , a , an , or a .
Payload, flow, and meta variables are supported when defining success messages.
JSON payloads are supported.
You can add as many track data shapes to a process flow as required. For example, you might place one immediately after a receiving connector to track everything received before anything else happens to the data, and another after the final sending connector to track everything sent into your destination system.
To add and configure a new track data shape, follow the steps below.
Step 1 In your process flow, add the track data shape in the usual way:
Step 2 Configure settings as required - the table below summarises available fields:
Step 3 Save your settings, then access them again:
...you'll now see success criteria options at the bottom of the shape settings drawer:
Step 4 The success criteria options are optional. If you don't need to apply these then your setup is complete - close the settings drawer and continue with your process flow as required. If you do want to use these options, see below for guidance.
When data passes through a track data shape, specified data fields are tracked and, by default, tracked data is marked as a success.
However, there may be times where you want to control the conditions under which the status of tracked data is deemed a success or a failure, and to record this outcome for future reference. The success criteria section allows you to:
When tracked data is marked as failed, it is still tracked and the shape still processes successfully - for example:
In this run log, notice that tracked data is marked as a failure (1) but tracked data is stored (2) and the track data step succeeds (3).
Any conditions that you want to apply can be added via filters. To define a new filter, click the add filter button:
You can add as many filters as you need - multiple filters work together with an 'AND' operator. Remember that you're defining conditions that must be met for a success outcome - if multiple filters are present they must ALL be matched. If one or more filters are not matched, the associated tracked data is marked as a failure.
Filters can be based on any field(s) found in your data, irrespective of whether you've chosen to track them.
No success criteria filters are defined
Success criteria filters are defined and the outcome is success
Success criteria filters are defined and the outcome is failure
Event connectors can trigger process flows by listening for events that are published to message queues/topics by a message broker (e.g. ).
Once an event connector is , it becomes available for use as a process flow trigger.
New event connectors are available for selection in process flow as soon as they are saved successfully.
Event connector triggers only become active when the process flow is enabled and deployed.
ALL messages published to selected queues/topics are passed through to the process flow.
Follow the steps below to add an event connector as a process flow trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new event listener button:
...event options are displayed:
Step 5 Save the shape settings.
The trigger webhook option can be used if you want to trigger a process flow whenever a given event occurs in your third-party application.
When you choose to add a webhook to a process flow, a unique Patchworks webhook URL is auto-generated. This URL must be added to your third-party application, so it knows where to send event data.
How you use webhooks is driven by your business requirements, and the capabilities of your third-party application. For example, your third-party application might send a POST request to a webhook which includes a batch of orders to be processed in the request body, or the webhook body might simply contain a notification message indicating that orders are ready for you to pull.
For webhooks to be received, a process flow must be enabled and deployed.
Patchworks webhook URLs are generated in the form:
For example:
The {{webhook_id}} is a Patchworks signature which is generated as a random hash (that doesn't expire). This provides built-in authentication for our URLs; however, they should still be kept private.
The default response for a successful webhook trigger is a status code of 200, with the following body:
You can send both GET and POST requests to a webhook. The table below summarises support for passing in data with requests:
If a request includes both body content and URL parameters, the body content takes precedence (URL parameters are ignored).
Follow the steps below to add a new webhook trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new webhook button:
...a unique Patchworks webhook URL is generated:
Step 3 Copy this URL and paste it into your third-party application.
The documentation for your third-party application should guide you through any required webhook setup.
Step 5 Build the rest of your process flow to handle incoming data from your defined webhook(s).
If required, you can change the default response for your webhook by selecting the 'edit' icon associated with the URL - for example:
Here, you'll find options to select an alternative status code and specify new body text:
Here you can:
Use the status code dropdown field to select the required response code.
Enter the required text in the body field.
Select the required format for your body content (choose from JSON, XML or Plain text).
When a process flow runs, the payload for received data flows through to subsequent steps. In a straightforward scenario we pull data from one connection, then perhaps apply filters and/or scripts before mapping/transforming data fields and finally pushing the payload into a target connection. This is a very linear example - we start with a payload and it flows all the way through to completion.
However, more complex scenarios might need to use a payload that was generated several steps previously, or even from a different process flow. This is where the add to cache and load from cache shapes come in.
Wherever you place an add to cache shape in a process flow, it will cache (i.e. store a copy of) the payload as it stands at that point in the process flow. You can then use a load from cache shape to reference this payload elsewhere in the same process flow and/or in other process flows for your organisation (depending on how the add to cache shape is configured).
For more information please see:
The add to cache shape is used to cache (i.e. store a copy of) the payload as it stands at that point in the process flow.
You can add as many add to cache shapes as you like in a process flow. For example, you might want to cache a payload as soon as it gets pulled from a source connection, and again later after it's been transformed:
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The maximum cache size is 50MB.
Cache names must not include full stop (.) or colon (:) characters.
Cached data is stored in Amazon S3.
To add an add to cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to cache the payload - typically this would be after a 'GET' connection shape, or perhaps after data has been mapped or manipulated via a script.
Step 2 Select the add to cache shape from the shapes palette:
Step 3 Click the create cache option:
...cache options are displayed:
Step 4 Click in the cache level > select cache field to choose when/where this cache will be available:
Choose from the following options:
Step 5 Enter a name for this cache:
The cache name must not include full stop (.) or colon (:) characters.
Step 6 If you have chosen a flow-level or company-level cache, you can set a data retention period to determine when this data will expire - for example:
The data retention period for a flow run-level cache is always 2 hours - this cannot be changed. The maximum retention period for a flow-level or company-level cache is 7 days.
Step 7 Save changes to exit back to add to cache settings where you can continue with your newly created cache.
Step 8 Click in the select a cache field and select your new cache from the list:
Step 9 Enter a cache key to identify this cache object - for example:
Your cache key can be:
A cache key cannot exceed 128 characters.
Step 10 If you have multiple incoming payloads (typically where source data is paginated or has been through flow control), you should consider how these payloads are cached. The save all pages option determines cache behaviour for multiple incoming payloads:
Save all pages toggled ON. All incoming payloads are saved for your cache key. If you access the cache, you'll see each page listed with a page number - for example:
Save all pages toggled OFF. Data associated with the given cache key is overwritten each time one of the multiple payloads is saved - so only the final payload is saved - for example:
Step 11 Set the append option as required. If this option is toggled ON, incoming data is appended to the existing cache key each time an update is made. If this option is toggled OFF, the cache key is overwritten with new data each time.
Step 12 Save changes. The add to cache shape is added to your process flow, displaying the given name and key - for example:
The first word transform function is used to extract the first word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the first item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the first word. The transform checks incoming string values and determines the 'first word' to be the word before the first occurrence of the given delimiter.
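As a sketch (values are illustrative), with a comma delimiter an incoming value of:

hats,scarves,gloves

...would be transformed to:

hats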
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select first word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped: . + * ? ^ $ ( ) [ ] { } | \ /
For example, a delimiter of * would be entered as: \*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice):
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The last word transform function is used to extract the last word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the last item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the last word. The transform checks incoming string values and determines the 'last word' to be the word after the last occurrence of the given delimiter.
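As a sketch (values are illustrative), with a comma delimiter an incoming value of:

hats,scarves,gloves

...would be transformed to:

gloves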
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select last word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped: . + * ? ^ $ ( ) [ ] { } | \ /
For example, a delimiter of * would be entered as: \*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes:
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The concatenate transform function is used to join the values for two or more source fields (using a given joining character) and then map the output of this transformation to a destination field.
For example, you might have a source system that captures the first name and last name for customer records, and then a destination system that expects this information in a single name field.
In the instructions below, we'll step through the scenario mentioned above, where our incoming payload includes the first name and last name for customer records, but our destination system expects this information in a single full_name field. The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be joined and then pushed to the specified destination.
Step 1 In your process flow, access settings for your map shape:
Step 2
Find (or add) the mapping row which requires a concatenate transformation. In the example below, we have a row that's currently set to map the source first name field into the destination full name field:
Step 3 On the source side of the mapping row, we need to add all the fields that need to be joined. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to join - for example:
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to join.
With all required source fields defined for our mapping row, we can add a concatenate transform function to join the values for these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select concatenate from the string section in the list of transform functions:
...concatenate options are displayed:
Step 4 In the join character field, enter the character that you want to use to join each of your source fields - for example, a hyphen or a space:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be joined:
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be joined - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to add any more source fields to be joined. Each time you accept a new source field, you'll see the sequence in which they will be processed when this transform function runs - for example:
Fields are joined in the sequence that they are added here.
Step 11 Having added all required source fields to be joined, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the given source fields for this mapping row will be joined and then that value is pushed to the target. The example below shows an incoming payload before and after the concatenate transformation is applied:
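The original before/after example isn't reproduced here, but with a space as the join character the effect would be along these lines (the field names are illustrative):

Before:  { "first_name": "Jane", "last_name": "Smith" }

After:   { "full_name": "Jane Smith" }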
We've already noted how the add to cache shape can be added to a process flow to cache the entire payload at a given point in the flow. The default behaviour is that when a process flow runs and hits an add to cache shape, any existing data associated with that cache is overwritten with a new payload from the new run.
However, it is possible to append data to a cache, so each time the process flow runs and the add to cache shape is reached, the current cache is appended to the existing cache. This works for any cache type (flow, flow run, and company).
Paginated data. If your connection shape receives paginated data, it's important to understand how the save all pages option works in conjunction with append. For more information please see our .
Cache size. Theoretically, if a cache is set to append data and then runs on a regular basis indefinitely, the cache size may grow to an unmanageable size. With this in mind, a limit is in place to ensure that a single cache cannot exceed 50MB.
Append data format. Appending cached data is supported for JSON only.
Shared caches. The append to cache operation is not atomic - as such we advise against multiple process flows attempting to update the same cache at the same time.
To use the append option, follow the steps below.
Step 1 Add an add to cache shape to your process flow - create your cache, then select it and add your cache key.
Step 2 Ensure that the save all pages option is set as needed. For more information about how this option affects appended data please see our .
Step 3 Enable the append option:
Step 4 A path to append to field is displayed:
Here, you need to consider the structure of the payload that you're passing in and specify a path that ensures that each new payload is appended in the right place.
Step 5 Save the shape. Next time the process flow runs the data will be cached and appended.
If you choose to view the payload for an add to cache shape, the payload will always show data from the latest run - for example:
However, when you add a load from cache shape, the payload will show ALL appended data so far - for example:
Always run the latest deployed version of this process flow. So, if the called process flow is edited and re-deployed at any point, the latest deployed version will always be called.
Always run the latest draft version of this process flow. So, if the called process flow is edited at any point, the latest edited (draft) version will always be called.
Here, you can search to access available tracked data summaries, and then choose to view details.
Tracked data includes data tracked by and shapes.
A tracked data extension can be purchased as a , to increase the number of days tracked data remains available.
are displayed for the most recent process flow runs where this entity was tracked - for example:
You'll see one entry for each occasion that this entity was tracked in this run. In our example, we have two entries because the associated process flow includes two track data shapes, and this entity passed through both.
Step 7 Ensure that your process flow is enabled and deployed - callbacks will not be made if this isn't done.
Define filter conditions that must be met for an entity's progress to be reported as a success or failure in tracked data summaries for this tracked item.
Add a message to be displayed in tracked data summaries for this tracked item.
In this context, tracked data marked as a failure simply means that one or more filters defined for success were not matched and therefore this item is reported as a failure in associated tracked data summaries.
These filters work in the same way as in the dashboard - select/define a field, then set conditions and values.
The success or failure outcome from these filters is reported in the logs, and also in - for example:
You can define a message to be displayed in the for associated tracked data:
This message can be text-only, or any combination of text, , , and . For example:
In , this example is shown as:
Messages are added to when:
Step 3 Select a broker from the list (all configured are available for selection):
Step 4 Select a queue from the list - all configured for the selected broker (i.e. ) are available for selection:
Remember that event connector triggers only become active when the process flow is enabled and deployed.
If required, you can .
Step 4 If you want to customise the response for your webhook, click the edit icon associated with the URL and make the required changes. For more information please see .
Step 6 Ensure that your process flow is enabled and deployed - webhooks will not be received if this isn't done.
How long a cached payload remains available depends on the cache level selected when you configured .
When a process flow hits an add to cache shape, all data from the incoming payload is cached. With this in mind, ensure that your incoming data is , and/or as required.
The default behaviour is for the existing cache to be overwritten each time it is updated. Please see the page for information about appending data.
If you are adding a company-level cache, you may want to make a note of the key that you specify here, so it can be shared with other users in your organisation who may want to .
It's important to understand how the save all pages option works in conjunction with the append option. If you aren't sure, please see our page before proceeding.
For more information see our .
Yes. As with any other process flow shape, you can view the associated payload for an add to cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
If you place an add to cache shape before a shape which generates multiple payloads (typically, a ), you can see each payload that is created via the payload dropdown - for example:
Cached data can be loaded via our load from cache shape. Please refer to the Load from cache shape section for more information.
In our example, our source data is coming in via a manual payload so we are defining the payload field manually - if you're using a connector shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 9 Go to .
All source fields that were added for this mapping in will be available for selection here.
If required, can be specified here.
Receive / send - The flow direction for the associated tracked data.
Date & time - The date & time that this data was tracked.
Success / fail - If success criteria filters are defined in the track data shape, the success/failure marker is determined by the outcome of these. If success criteria filters are NOT defined in the track data shape, the default marker is success.
Message - If a success criteria message is defined in the track data shape, it is shown here.
Body content
URL parameters
Flow run - The data associated with this cache is only available while the process flow is running. When a process flow run completes, existing cached data is deleted. How this happens depends on whether the process flow is enabled & deployed.
Enabled & deployed process flows - In this case, the flow run cache is cleared as soon as the process flow completes.
Draft/inactive process flows - In this case, we use a TTL (Time to Live) with a default of 2 hours to determine when the cached data is deleted. There's no chance that flow run cached data could be re-used in the TTL deletion window - each time a flow run cache is used, a unique flow run id is added to the cache key used to get and set data. Because every process flow run has a unique run id, there's no possibility for another flow run to access the data from a previous run.
Flow - Data in the cache is retained after the process flow is run, so it can be loaded again within this process flow if required.
Cache retention - When you choose to add a flow cache, retention options are available so you can decide how long cached data should be retained (you can set a time limit in seconds, minutes, hours, or days). The default setting is 2 hours. This can be updated to a maximum of 7 days.
Company - The data associated with the latest update to this cache is available for use in this process flow and in any other process flows created within your company profile. It's not currently possible to access different versions of a cache. So, each time a process flow runs with the same add to cache shape, the payload for that cache is overwritten/appended with the latest data and it's this that will be available to load from a company cache.
Cache retention - When you choose to add a company cache, retention options are available so you can decide how long cached data should be retained (you can set a time limit in seconds, minutes, hours, or days). The default setting is 2 hours. This can be updated to a maximum of 7 days.
Static - Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). Example: orders
Dynamic - The cache key resolves dynamically using variables. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information please see our Generating dynamic cache keys with variables page. Example: order-[[payload.0.id]]
Source instance / Source endpoint - If data is coming into the process flow via a connector shape, use these dropdown fields to select appropriate source connector details (i.e. the same instance and endpoint as configured for the previous connector shape). If data is coming into the flow via a non-connector source (such as a manual payload, API request, or webhook) then leave these fields blank.
Entity - If data is coming into the flow via a connector shape, this field will be set as required by default. Otherwise, select the entity type associated with the data field(s) that you want to track. The selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in tracked data summaries.
Direction - If data is received via a connector shape, this field will be set as required by default. Otherwise, select the flow direction (send or receive) associated with the data field(s) that you want to track: if the tracked data is being pulled from a source, set this option to receive; if the tracked data is being pushed to a destination, set this option to send. The selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in tracked data summaries.
Field paths - Define one or more data fields to be tracked - i.e. fields that you may want to look up in the event of a query.
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
When we come to load this data, we must target the required cache keys. In the same way that we use a payload variable to add data to a cache with dynamic cache keys, we can use a payload variable to load data from these keys.
To do this, you configure a load from cache shape with a 'multi-pick' payload variable in the cache key, and ensure that data passed into this shape contains the values required to resolve this variable.
In summary, you can drop a single load from cache shape into a process flow and specify a payload variable as the required cache key. This must be in the form:
...where <element> should be replaced with whichever data element you will be passing in to resolve the cache key. For example:
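For instance, if each order was cached with a key built from its id, the multi-pick cache key might be entered as (the order- prefix is illustrative):

order-[[payload.*.id]]

Here, id is the <element> - every id value found in the payload passed into the shape is used to resolve a cache key to load.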
The <element> defined here will be the same data element that was specified in the payload variable for the corresponding add to cache shape.
You then need to pass in any <element> values that should be used to resolve required cache key names. This might be achieved via a connection shape (if values are being generated from another system), or perhaps a manual payload shape. Whichever shape you use must be placed immediately before the load from cache shape.
To help understand how this approach works, we will step through an example.
Suppose we have a scenario where a process flow has been built to receive incoming orders, and another process flow needs to target specific orders received from this flow.
Process flow 1: Add to cache
To allow the second process flow access to orders processed by the first, we must add all incoming orders to a company type cache in the first process flow (remember that company type caches can be accessed by any other process flow created for your company profile). To ensure that we can go on to target specific orders from this cache later, we will cache every order in its own cache key, using a payload variable.
Process flow 2: Load from cache To retrieve specific orders from the cache created in the first process flow, we will pass the required order ids into a load from cache shape. These ids will be used to resolve dynamic cache keys, using a payload variable.
Here, we will batch an 'orders' payload into single order payloads - then we'll add each payload to its own cache key, which is created dynamically from a payload variable. Let's break these steps down:
Here, we will pass the required order ids into a load from cache shape. These ids are then used to resolve dynamic cache keys (via a payload variable) to determine which orders should be loaded. Let's break these steps down:
When an add to cache shape is dropped into a process flow, the entire incoming payload is cached and associated with the given cache key. Depending on the cache type, you can load this cache later in the same flow or in a different flow.
In the simplest scenario, your given cache key would be a static value (e.g. customers) and you would use this to load the entire cache (containing perhaps tens, hundreds, even thousands of items) where required. But what if you want to load a specific item from a cache, rather than the whole thing?
This is where dynamic cache keys are so useful.
To load data from a cache, you configure a load from cache shape with the required cache and a single cache key. All data associated with your given cache key is loaded.
Consider the example incoming payload below, where four records are cached with a static cache key with a value of customers:
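The example payload isn't reproduced here, but it would be along these lines (the ids and names are illustrative):

[
  { "id": 1001, "name": "Ann Example" },
  { "id": 1002, "name": "Bob Example" },
  { "id": 1003, "name": "Cam Example" },
  { "id": 1004, "name": "Dee Example" }
]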
If we were to configure a load from cache shape to access the customers cache key, all four records would be loaded.
So, in order to load specific items from a cache, the incoming data must be added to a cache in such a way that we can easily target individual items. We need an efficient way to take incoming data, batch it into single-record payloads and add each of these to the cache with its own unique, identifying cache key - i.e.:
We can achieve this as follows:
The incoming payload is batched into multiple payloads - one payload per data element (e.g. one order per payload, one customer per payload, one product per payload, etc.).
Configure the add to cache shape and specify a payload variable as the cache key, where the variable looks for the first occurrence of a uniquely identifying element in the payload (typically an id or reference number).
The add to cache shape receives and caches single-record payloads from the flow control shape. The cache key for each payload is generated dynamically by resolving the payload variable from each incoming payload.
Flow control is an easy way to batch incoming data into single-record payloads, however you may prefer an alternative approach. The important point is that the add to cache shape must receive single-record payloads - how you achieve this is up to you.
When you specify a dynamic variable as the cache key, the value for that variable is injected into the key. To prevent the case where large amounts of data are passed into the key, a limit of 128 characters is in place.
Follow the steps below to configure an add to cache shape with a payload variable for generating dynamic cache keys.
These steps assume that you have already defined a flow control shape (or some other means) to ensure that the add to cache shape receives single-record payloads.
Step 1 Drop an add to cache shape into your process flow, where required.
Step 2 In the add to cache shape settings, choose to create cache:
Step 3 Set the cache level and name as required and save changes.
For more information on these fields please see the add to cache shape page.
Step 4 Select the cache that you just created - for example:
Step 5 Move down to the cache key field and enter the required key. Here, you use standard payload variable syntax to define your target data element:
...where schema notation should be replaced with the notation path to the first occurrence of the required element in the payload which should be used to form the cache key. If required, you can also include a static prefix or suffix. For example:
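As a sketch (the order- prefix and payload are illustrative), a cache key of order-[[payload.0.id]] applied to a single-record payload such as:

[ { "id": 1001, "status": "paid" } ]

...would resolve to the cache key:

order-1001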
The output of the payload variable will be used as the cache key.
Our example uses dynamic payload variables; however, you can also use metadata variables and/or flow variables. For more information please see the dynamic variables section.
Step 6 Save the add to cache shape settings.
Cached data can be loaded via our load from cache shape. Please refer to the Load from cache shape section for more information.
If required, you can import existing data into a de-dupe pool. For example, you may have records that you know have been processed elsewhere and want to ensure that they aren't processed via Patchworks.
Conversely, you can export de-dupe pool data to a CSV file, for use outside of Patchworks.
De-dupe data exports are completed in CSV format, delimited ONLY with a single comma between fields.
The exported file includes two columns with value and entity_type_id headers. For example:
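A minimal sketch of the format is shown below - the entity_type_id values are purely illustrative, so download the entity types list (described later in this page) to find the real ids:

value,entity_type_id
0100001,1
0100002,1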
When de-dupe data values are imported:
All records in the import file are added to the data pool as new items
Any existing items in the data pool are unchecked and unchanged
To import de-dupe values, the import file must be in the same format as export files above, with the same headers. I.e.:
Where:
The value is the key field value that you are matching on.
The entity_type_id is the internal Patchworks id for the entity type associated with the key field that you are using to match duplicates. This id must be present for every entry in your CSV file. You can download a list of ids by following steps detailed later in this page.
Import files cannot exceed 5MB.
To export/download a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the name of the data pool that you want to export:
Alternatively, you can create a new data pool.
Step 3 With the data pool in edit mode, move to the lower tracked de-dupe data panel and click the download button:
Step 4 The download job is added to a queue and a confirmation message is displayed:
Step 5 When your download is ready, you'll receive an email which includes a link to retrieve the file from the file downloads page. If you can't/don't want to use this link, you can access this page manually - click data pools in the breadcrumb trail at the top of the page:
...followed by the settings element option:
Step 6 Select the file downloads option from the settings page:
Step 7 On the file downloads page, you'll find any exports that have been completed for your company profile in the last hour.
This list may include exports from different parts of the dashboard, not just data pools (for example, run log and cross-reference lookup data exports are added here).
Step 8 Click the download button for your job - the associated CSV file is saved to the default downloads folder for your browser.
Download files are cleared after one hour. If you don't manage to download your file within this time, don't worry - just run the export again to create a new one.
If you want to import data into a de-dupe data pool, you need to ensure that each record in your CSV file includes an entity_type_id. To find which id you should use, follow the steps below to download a current list.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the download entity types button at the top of the page:
Step 3 A CSV file is saved to the default downloads folder for your browser.
To import data into a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 If you want to import data into an existing data pool, click the name of the required data pool from the list:
Alternatively, you can create a new data pool.
Step 3 Move to the lower tracked de-dupe data panel and click the import button:
Step 4 Navigate to the CSV file that you want to import and select it:
Step 5 The file is uploaded and displayed as a button - click this button to complete the import:
Step 6 The import is completed - existing values are updated and new values are added:
You may need to refresh the page to view the updated data pool.
Understanding how pagination options impact what data is cached.
When you drop an add to cache shape into a process flow, there are two options that you should consider if your selected endpoint paginates the data that is received OR you generate multiple payloads in some other way (for example, via the flow control shape). These options are save all pages and append.
Together, these two options determine how multiple payloads are cached, so it's important to understand the implications of each.
On this page we focus on paginated data however, the same principles apply whenever multiple payloads are cached, irrespective of whether those payloads are generated via pagination or some other means (for example, via the flow control shape).
When paginated data is pulled from a connection shape, a payload is created for each page - you can see these in the run log payload tab:
If you are caching paginated data and choose to toggle the save all pages option to on, the payload for each page is saved with its page number and a unique key. For example:
The unique key is generated dynamically, by adding the page number to your specified cache key. If the cache is a flow run type, the unique key will also incorporate the flow run id.
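As an illustration (the exact key format may vary), caching three pages against a cache key of orders could produce unique keys along the lines of orders1, orders2 and orders3 - one per page.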
It's important to note that every time a connection shape pulls paginated data, page numbers reset to 1.
When the append option is toggled ON, incoming payloads are appended to cache keys. How this works depends on the save all pages option:
The given cache key is overwritten each time a payload is cached. As such, the cache key will only ever include data from the LAST payload received.
The first time that multiple payloads are received, each one is saved to its own unique key, against your specified cache key. Next time the cache receives data, any existing unique keys associated with your specified cache key are overwritten (additional unique keys are created for new payloads/pages as needed). As such, each unique key will only ever contain the latest data for the correlating payload/page number.
Each payload is appended to your specified cache key. As such, data in the cache key continues to grow with each data pull - nothing is overwritten.
The first time that multiple payloads are received, each one is saved to its own unique key, against your specified cache key. Next time the cache receives data, any existing unique keys associated with your specified cache key are appended with the latest data from correlating payload/page numbers (additional unique keys are created for new payloads/pages as needed). As such, data associated with existing unique keys continues to grow with each data pull.
The diagram below illustrates this:
For information about setting the append option, please see our Appending data to a cache page.
This approach is the simplest - all incoming data is cached with a static cache key.
In the example below, all incoming customer records will be added to a cache named ALLcustomers and a static cache key named customers:
When the data is cached, it's likely that the cache will include multiple records - for example:
To retrieve this cache, we simply drop a load from cache shape where required in the process flow and specify the same cache and cache key that were defined in the corresponding add to cache shape:
This approach assumes that the cache was created with a payload variable. The load from cache shape works as normal to retrieve cached data - you choose the cache name and key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated from the payload variable that was specified when the cache was created.
If a payload variable has been used to cache data, you would typically have included a flow control shape to create multiple payloads - for example:
So you will have multiple cache keys that can be loaded. To do this, you can add one load from cache shape for every cache key that you want to retrieve, specifying the required key in each case. For example:
Alternatively, you can add a single load from cache shape and target specific cache keys by passing in the required ids.
Loading data from a cache is very straightforward using the load from cache shape, however you do need to consider what data you want to load. You can:
Each of these options requires a slightly different approach, as summarised in the diagram below and explained in subsequent sections:
The load from cache shape is used to retrieve a stored payload from an existing cache key (created from an add to cache shape).
You might configure a load from cache shape in the same process flow as the original add to cache step or - if a cache was added and set to company level - you might choose to load it in a different process flow.
To add a load from cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to load the payload from a cache - this could be at the very start of a process flow, or perhaps somewhere further down.
Step 2 Select the load from cache shape from the shapes palette:
Step 3 Click in the select cache field and choose which cache you want to retrieve:
In this list, you'll find any caches that have been added to this process flow (via the add to cache shape), together with any caches that have been added to other process flows and set to a cache level of company.
Step 4 Enter the cache key that you want to retrieve - for example:
Your given cache key might be static or dynamic, depending on how the cache was configured in the corresponding add to cache shape:
Static - Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). Example: orders
Dynamic - The cache key resolves dynamically using variables. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). Example: order-[[payload.0.id]] OR order-[[payload.*.id]]
For detailed information about each of these approaches, please see What cached data do you want to load?
The cache key must be associated with an existing add to cache shape, either in the same process flow or (in the case of company-level caches) in another process flow.
Step 5 If you want this process flow to fail if for any reason this cache can't be retrieved, tick the fail on cache miss option:
If you leave this option un-ticked, the process flow will continue to run if the cache can't be loaded.
Step 6 If the cache that you're loading was created with the save all pages option toggled ON, you should toggle the load all pages option ON when loading this data:
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key. For more information please see our Cache pagination options.
Step 7 Save changes. The load from cache shape is added to your process flow, displaying the given name and key - for example:
Yes. As with any other process flow shape, you can view the associated payload for a load from cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
You can view and manage all existing caches from the data caches page - to access this page, select caches from the dashboard navigation menu.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The data caches page is split into three sections: flow run caches, flow caches, and company caches:
Each cache is listed with the following details:
Name
The name that was specified when the cache was created. Caches are created via the add to cache shape.
Flow
For flow and flow run caches, this is the name of the process flow which is using the cache. This information is not shown for company caches, as the cache might be used in any process flow within a company profile.
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed by a process flow. The cache may or may not have been updated with data at this time (even if there is no data to be added, the access date/time is logged).
Keys
The number of cache keys associated with this cache. Cache keys are created via the add to cache shape.
Size
The current size of the cache in proportion to the limit.
Delete
Delete the cache (and all associated data).
If you have a lot of caches, you can search by name:
To access cache details for a particular cache, click on its name:
When you select a cache from the list, an edit cache page is displayed:
From here you can:
To change the name of the cache, simply update the name field in the upper cache details panel, then click the save button.
When the name is updated and saved, the change is immediately reflected in any add to cache shapes in process flows where this cache is used.
The cache name must not include full stop (.) or colon (:) characters.
You can use the maximum age slider to change the cache retention period for a cache:
Note that:
The maximum age for a flow run cache is 2 hours - this cannot be changed
The maximum age for a flow or company cache can be changed to anything up to 7 days
The usage panel shows general usage information about the cache:
Here you can see:
Size
The current size of the cache, shown with a percentage use indicator. The maximum cache size is 50MB
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed. This timestamp is updated even when no data was added to the cache.
Keys
The number of keys associated with this cache.
The cache contents panel displays an entry for each cache key update. Information shown varies, depending on the cache type.
The following details are displayed for each cache item in a flow run-level cache:
Flow Run ID
The unique id of the process flow run that updated the cache key.
Started at
The date and time that the process flow run was started.
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via flow control) and the save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
View
Click the 'eye' icon to view the content associated with this key.
The following details are displayed for each cache item in a flow-level or company-level cache:
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via flow control) and the save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
View
Click the 'eye' icon to view the content associated with this key.
To clear all current content in the cache, click the clear cache button:
This removes any existing data but leaves the cache in place so it can still be used in process flows.
If caches have been added to your process flow or company-level caches have been added for use in any process flow, you can reference these in field mapping transformations.
Using a cache lookup transformation function, you can look up values from a cache and map them to fields in a target system.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when referencing a cache we don't select a source field - the specified cache data is our source.
Step 1 In your process flow, access settings for the map shape that you want to update:
Step 2 Click the add mapping rule option - for example:
Step 3 Click the add transform button:
Step 4 Click the add transform button:
Step 5 Click in the name field to access a list of all available transform functions, then select cache lookup:
Step 6 Cache reference fields are displayed:
Complete these fields using the table below as a guide:
Cache
Use the dropdown list to select the cache that you want to reference. Available caches will be:
All flow-level caches added for this process flow
All company-level caches added from any process flow
All flow run-level caches created for this run
Key
Enter the key that was specified in the add to cache shape for the cache that you want to access. Alternatively, if this transformation is preceded by another transform function, you can leave this field blank and pick up a value from the output of the previous function - see the notes below on populating the cache key from a previous transform function.
Lookup
You can use dot notation to look up specific elements from the cached payload (see the sketch after this table). If you leave this field blank, the full cached payload is retrieved.
Default
If required, specify a default value to be used if the cache lookup transform doesn't find a value to return.
Load all pages
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). Toggle this option ON if you want the lookup to include all of these pages.
Fail on miss
If this option is toggled ON, the map shape (and therefore the process flow) will fail if the cache lookup can't be fulfilled.
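To illustrate the Lookup and Default fields, here is a hypothetical Python helper that behaves in roughly the way described above - the platform performs this lookup for you, so the function and data are purely illustrative:

```python
# Hypothetical illustration of a dot-notation lookup - not the platform's code.
cached = {"customer": {"id": 1000000001, "address": {"city": "Leeds"}}}

def lookup(data, path, default=None):
    """Walk a dot-separated path (e.g. 'customer.address.city') through nested dicts/lists."""
    current = data
    for part in path.split("."):
        if isinstance(current, list) and part.isdigit() and int(part) < len(current):
            current = current[int(part)]
        elif isinstance(current, dict) and part in current:
            current = current[part]
        else:
            return default                                 # behaves like the Default field
    return current

print(lookup(cached, "customer.address.city"))             # Leeds
print(lookup(cached, "customer.phone", default="n/a"))     # n/a (default applied)
```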
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way. Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the specified cache values will be mapped to the target field.
The steps detailed above show how to configure the cache lookup transform with a known cache key. However, it's possible to populate the cache key automatically, using the output from a previous transform function.
To do this, you add a mapping row in the usual way and define any required transform functions to produce the required value for cache keys. Once this is done, add a cache lookup transform function (as shown above) but leave the key field blank.
When the key field is blank, output from the previous transform function for the mapping is applied.
Suppose you have a cache where multiple cache keys have been defined in the form itemref-last_name - for example: 1000021-Smith
Now suppose you want to define a cache lookup transformation which will determine the key by manipulating mapped fields. You would:
Add a mapping row with two source fields - one for itemref and another for last_name.
Select itemref as the target field.
Add a concatenate transform function to join the itemref and last_name fields with a hyphen.
Add a cache lookup transform function as defined above, but leave the key field blank.
When the process flow runs, output from the concatenate transform function will be applied as the key for the cache lookup transform function.
The example above describes how you might use a concatenate transform function as the means to generate a cache key; however, the output from any transform function can be used.
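As a rough sketch of the logic described above (the field values and cached data are invented), the concatenate transform produces the key and the cache lookup then uses it:

```python
# Sketch only - the transforms run inside the map shape; this just mirrors the logic.
itemref = "1000021"
last_name = "Smith"

derived_key = f"{itemref}-{last_name}"        # output of the concatenate transform
print(derived_key)                            # 1000021-Smith

# With the cache lookup transform's key field left blank, that output becomes the cache key.
cache = {"1000021-Smith": {"itemref": "1000021", "status": "active"}}   # illustrative cached data
print(cache.get(derived_key, "default value"))
```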
Trigger schedule options are used to schedule the associated process flow to run at a specified frequency and/or time. Here, you can use intuitive selection options to define your requirements or - if you are familiar with regular expressions - use advanced options to build your own expression.
Trigger schedules are based on Coordinated Universal Time (UTC).
Schedules can be defined based on the following occurrences:
Define a schedule to run every x minutes - for example:
Define a schedule to run every x hours - for example:
Define a schedule to run on selected days of the week at a given start time - for example:
Use the every dropdown list for quick daily presets, or define custom settings:
Define a schedule to run on selected days of the month or weeks, for selected months, at a given start time - for example:
Use the every dropdown list for quick monthly presets, or define custom settings:
If you are familiar with cron expressions, you can select this option to activate the cron expression field beneath and enter your required expression directly:
Patchworks supports the 'standard' cron expression format, consisting of five fields: minute, hour, day of the month, month, and day of the week.
Each field is represented by a number or an asterisk (*) to indicate any value. For example:
0 5 * * *
would run at 5 am, every day of the week.
Extended cron expressions (where six characters can be used to include seconds or seven characters to include seconds and year) are not supported.
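A few illustrative expressions in the supported five-field format (minute, hour, day of the month, month, day of the week), all evaluated against UTC:

```python
# Illustrative cron expressions in the supported five-field format.
schedules = {
    "*/15 * * * *": "every 15 minutes",
    "0 * * * *": "at the top of every hour",
    "0 5 * * *": "at 05:00 UTC every day",
    "30 6 * * 1-5": "at 06:30 UTC, Monday to Friday",
    "0 2 1 * *": "at 02:00 UTC on the first day of every month",
}
```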
Follow the steps below to add a new trigger schedule.
Step 1 To add a new schedule, click the add new schedule button:
Step 2 Select an occurrence.
Step 3 Define your required settings for the occurrence.
Step 4 Click save to save this schedule. The schedule is added to the shape - for example:
You can add a single schedule, or multiple schedules. When you add multiple schedules, ALL of them will be active.
Using the try/catch shape, you can build your own path to handle process flow sync exceptions elegantly.
Place a try/catch shape before key steps in your flow, then configure its settings to determine behaviour when exceptions are found. Once this is done, the shape is added to the canvas with two routes - one for try and one for catch:
For the try route, build your flow in the usual way to achieve the required result. For the catch route, define a flow that should be followed for exceptions. For example, you might add exceptions to a cache (so they can be processed subsequently) and then notify specified contacts that exceptions have occurred.
The notify shape can be very powerful when used with the try/catch shape. Keep in mind that you can include meta, flow, and payload variables to define notification messages. For example:
When the process flow runs, data flows down the try route and ideally completes without exceptions. However, if an exception is found, the associated payload is removed and sent along the catch route.
You can add one try/catch shape per process flow
If a connector needs to retry authentication, the retry is NOT caught (i.e. it's not sent into the catch route). If re-authentication is successful the flow continues as normal (i.e. along the try route), otherwise the process flow fails.
As noted above, if an exception is found it gets removed (as a failed payload) and sent along the defined catch route. For example, if a try/catch shape receives 20 payloads and finds 4 exceptions, then 4 failed payloads are sent along the catch route.
Failed payloads can be found on the failed payloads tab in run logs - for example:
To add and configure a new try/catch shape, follow the steps below.
Step 1 In your process flow, add the try/catch shape in the usual way:
You can add one try/catch shape per process flow. It's up to you where you place this in your flow, but it's generally a good idea to add it at the very start to ensure that all steps are checked.
Step 2 Access settings for the newly placed shape:
Step 3 Choose the action to take if an exception is encountered:
Available options are summarised below:
Succeed as partial success
The flow completes and, where possible, data is synced. The flow is logged with a status of partial success. Failed payloads for exceptions are removed and are available from the failed payloads tab in run logs.
Fail flow
The flow completes and, where possible, data is synced. The flow is logged with a status of failure. Failed payloads for exceptions are removed and are available from the failed payloads tab in run logs.
In this case, the flow is marked as a failure (and will show as such in run logs) but it's important to note that this does not cause the process flow to stop - any valid payloads continue through the flow. It's likely that you would only use this option instead of succeed as partial success if logging any failed payloads as a general flow failure is important from a reporting/metrics perspective.
Fail flow & retry
The flow completes and, where possible, data is synced. The flow is logged with a status of retried. The flow is retried with ALL data, and is retried ONCE only. Failed payloads for exceptions are removed and are available from the failed payloads tab in run logs.
Use this option with care: when the process flow is retried, all data is processed again. If you have any doubt as to whether duplicate records will be handled correctly, we advise using a different action and managing exceptions separately (for example, adding them to a cache and processing from there). A retry will only happen if the process flow was NOT triggered manually.
Step 4 Save settings to return to the canvas and build your try and catch routes as required.
The Callback feature allows you to send an API request which initialises a process flow and returns data in a real-time, synchronous call.
This means you can incorporate process flows in your online systems where instant data retrieval is needed - for example, as part of an online checkout process (check inventory levels, find delivery lockers, check gift card balances, etc.).
Implementing a callback shape requires two elements:
Callback trigger. Similar to webhooks, callback triggers are unique Patchworks URLs you can use in API calls to trigger the associated process flow and receive callback data.
Callback shape. Add this shape to your process flow at the point you want incoming data to be returned.
Your process flow must include a callback trigger (a unique Patchworks URL generated from the trigger shape) and a callback shape (placed in the flow at the point you want data returned to your API).
When you send a GET or POST request to the Patchworks callback trigger URL, the associated process flow is initialised - a callback connection is opened and all shapes up to the FIRST callback shape are processed in a dedicated callback queue. This queue takes priority over all other queues, ensuring that any processing needed to obtain callback data is completed at maximum speed.
When the callback shape is reached, current data is returned (the content-type header for this data is set via settings for the callback trigger). If multiple payloads are passed into the callback shape, they are returned as an array.
Once the payload is returned, the callback connection is closed and any subsequent shapes are processed via normal queues. This process is shown below:
All customers access the same callback queue, so performance depends on the complexity of your process flows and general platform load.
For faster speeds, your company can purchase a dedicated callback queue. Please contact our Sales team for details.
A Patchworks bolt-on is required to use callbacks - please contact us for pricing information.
A callback can only be made if the process flow is enabled and deployed.
For a callback to work, your process flow must include both a callback trigger and a callback shape.
If a process flow includes a callback trigger but no callback shape, the flow runs normally (without errors or warnings). However, with no mechanism to determine when/what data to return, the callback connection stays open for 60 seconds and closes when no payload is found.
If a process flow includes a callback shape but no callback trigger, the flow won't have an initiating URL for returning payload data from the callback shape. The flow runs normally (without errors or warnings) but no data is returned.
When a process flow is initialised with a callback trigger, a callback connection is opened and we check for a returned payload every 200 milliseconds. If no payload is found within 60 seconds, the connection is closed.
You can send GET and POST requests to a callback trigger.
All callback responses are contained in an array.
A process flow can include multiple callback shapes however these should be placed with care to ensure optimal performance - see Best practice for callback shape placement for more information.
To initialise a process flow via a callback, you add one or more callbacks to the trigger shape - for example:
This unique URL should be copied and used in your API requests. For information about working with the trigger shape, see Trigger shape (callback).
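For illustration, a request to the callback trigger might look like the sketch below (using Python's requests library; the URL and request body are placeholders, not real values):

```python
# Illustrative only - substitute the callback trigger URL copied from your trigger shape.
import requests

CALLBACK_TRIGGER_URL = "https://example.patchworks.url/callback-trigger"  # placeholder

response = requests.post(
    CALLBACK_TRIGGER_URL,
    json={"sku": "ABC-123"},   # example request body; GET requests are also supported
    timeout=65,                # the callback connection closes after 60 seconds by default
)

print(response.status_code)                 # the response code configured on the callback shape
print(response.headers.get("Pw-Flow_Run"))  # flow run id - useful for finding the run in logs
print(response.json())                      # callback data, returned as an array by default
```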
Callback shapes are added in the usual way - build your process flow to the point you want data to be returned, and add a callback shape from the shapes palette:
The callback shape is an 'opt-in' advanced feature. If your subscription includes advanced features and you'd like to use callbacks, please contact us to enable it.
Having added a callback shape to your process flow, click the settings icon:
Now you can choose a response code to be returned:
Available response codes are:
200
OK
201
Created
400
Bad request
By default, if a callback shape receives multiple payloads, these are output as a single array. The first payload option can be toggled on if you only want to return the first payload received:
For sample responses, see our callback responses section below.
The essence of a callback shape is returning required data as fast as possible, so we need all shapes leading up to it to be processed quickly. The priority callback queue helps with this, but there are a couple of process flow design points to keep in mind for optimal performance.
With the default 60-second callback connection window in mind, we advise keeping the path up to your callback shape as lean and efficient as possible - the more shapes placed before a callback shape, the greater the chance of a timeout.
Before a callback shape, focus only on getting data that needs to be returned and then handle anything else in the flow after the callback shape.
The callback connection window can be increased from 60 seconds to a maximum of 120 seconds - please log a request with the Patchworks support team if this is required.
All shapes up to the FIRST callback shape are processed via the priority callback queue. A process flow can include as many callback shapes as you like however, once the FIRST callback shape is hit, all subsequent shapes (including additional callback shapes) are processed via standard queues. This is shown below:
By default, callback responses are always returned in an array. Where multiple payloads/pages are generated (e.g. where data is paginated or passed through flow control), each payload/page is returned within that initial array. JSON and XML sample responses are shown below.
You can change the behaviour for multiple payloads using the return first payload option. If this option is toggled on and the callback shape receives multiple payloads, only the first payload is returned (without the initial array).
Sample responses for all scenarios are provided below:
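As a purely illustrative example (the field names and values below are invented), the two behaviours produce shapes like this:

```python
# Invented example data - shown only to illustrate the two response shapes.

# Default behaviour: every payload/page is wrapped in an array.
default_response = [
    {"id": 5693105439058, "available": 4},
    {"id": 5697116045650, "available": 0},
]

# With "return first payload" toggled on: only the first payload, no wrapping array.
first_payload_response = {"id": 5693105439058, "available": 4}
```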
Process flow runs triggered from a callback are logged in the usual way, with the triggered by type set to callback - for example:
When data is returned via a callback, a Pw-Flow_Run response header is included, where the value is the flow run id - for example:
You can use this to search for the associated entry in run logs and view details for the run:
If you have defined custom scripts for use in process flows, use the script shape to select a script to apply at a given point in a process flow.
You can use any version of a script which has been saved and deployed.
Creating a custom script is an advanced feature which requires some in-house development expertise.
If a script fails it is retried three times (automatically) before a failure is given.
A script will time out if it runs for more than 120 seconds.
Step 1 In your process flow, add the script shape in the usual way:
Step 2 You're prompted to select an existing script:
Step 3 Select the script that you want to use at this point in the process flow:
The list of available scripts only includes scripts which are currently deployed for use.
Step 4 The latest deployed version of the script is added to the shape - for example:
Code is displayed in view-mode. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
Step 5 Unless you have a specific reason to do otherwise, we advise using the latest version of scripts. However, if you do need to use a previous version of the script, select the 'versions' dropdown field to make your selection - for example:
Step 6 Save the shape:
To view/change the selected script for an existing script shape, click the associated 'cog' icon:
From here, the existing script is displayed - you can either select a different script, or a different version of the existing script:
Remember that the script code can't be changed here. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
The de-dupe shape is used to identify and then remove duplicate entries from an incoming payload. For more background information please see our De-dupe shape page.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
Currently, the de-dupe shape supports JSON payloads.
To add and configure a new de-dupe shape, follow the steps below.
Step 1 In your process flow, add the de-dupe shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be de-duped originates - for example:
If your incoming data is via manual payload, API request, or webhook then you can remove any default source instance and endpoint selections:
Step 3 Move down to the behaviour field and select the required option.
For more information about these options please see our De-dupe shape behaviour section.
Step 4 Move down to the data pool field and select the required data pool.
If necessary, you can create a data pool 'on the fly' using the create data pool option. For more information please see Adding a new data pool via the de-dupe shape.
Step 5 In the key field, select/enter the data field to be used for matching duplicate records. How you do this depends on how the incoming data is being received - please see the options below:
The selection that you make here determines how the payload is adjusted when duplicate data is removed. For more information please see How duplicate data is handled.
Step 6 Select the payload format:
Step 7 Save the shape.
When designing a process flow, it can be useful to skip specific shapes to speed up the testing process - for example, you might skip a receiving connector shape so you're not pulling actual data for tests, and enter a small, manual payload instead.
When a shape is marked as skip, whatever the shape is designed to do does not happen - this step in the flow is ignored.
All shapes except the trigger shape can be skipped.
You can skip any number of shapes in a process flow.
Skipping a route shape or a try/catch shape effectively ends the flow run at that point since any defined routes won't start.
When a shape is set to skip and then saved, that shape's appearance in the flow is faded and summary actions are crossed through - a skipped label is also shown.
Skipped shapes show as warnings in logs with a flow step skipped message.
Step 1 In your process flow, click the 'settings' icon for the shape that you want to skip - for example:
Step 2 Set the skip toggle option (at the top of the settings drawer) to ON - for example:
Step 3 Save settings:
Step 4 Repeat for all shapes that you want to skip. These shapes are marked as skipped and their appearance is faded with summary actions crossed through - for example:
Step 5 Run/test your process flow as normal. In the run logs, you'll notice that any skipped shapes are noted as such - for example:
Step 6 When you're ready to reinstate a skipped shape, access shape settings and toggle the skip option back to OFF, then save changes.
This approach assumes that the cache to be loaded was created with dynamic cache keys, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see .
When we come to load this data, we must target the required cache key. If you only want a single item, the quickest way is to specify the resolved cache key.
The load from cache shape works as normal - you choose the cache and cache key to be loaded:
Consider the following process flow:
Here, our manual payload contains customer data as below:
To allow us to target specific customer records from this payload, we send it through a flow control shape, which is set to create one payload per customer:
...so now we have lots of payloads to be cached:
If we look at the payload for the first of these, we can see it contains a single customer record - notice that there's an id field with a value of 1000000001. This field uniquely identifies each record.
Next we define an add to cache shape - we create a new cache and use a payload variable to generate a dynamic cache key for each incoming payload:
Here, the payload variable is defined as customer-[[payload.0.id]], where:
customer- is static text to prefix the resolved variable.
[[payload.]] instructs the shape that this variable should be resolved from the incoming payload.
0 denotes that the first occurrence of the following item found in the payload should be used to resolve this variable.
id is the name of the field in the payload to be used to resolve this variable.
So, if we take our first payload above:
...our payload variable would resolve to the following cache key: customer-1000000001
This is what we use in our load from cache shape:
The de-dupe shape can be used to handle duplicate records found in incoming payloads. It can be used in three modes:
Filter. Filters out duplicated data so only new data continues through the flow.
Track. Tracks new data but does not check for duplicated data.
Filter & track. Filters out duplicated data and also tracks new data.
A process flow might include a single de-dupe shape set to one of these modes (e.g. filter & track), or multiple de-dupe shapes at different points in a flow, with different behaviours.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
Tracked de-dupe data can be interrogated via the - by default it's available here for 15 days.
The de-dupe shape is not atomic - as such we advise against multiple process flows attempting to update the same data pool at the same time.
The de-dupe shape works with incoming payloads from a connector shape, and also from a manual payload, API request, or webhook.
JSON and XML payloads are supported.
The de-dupe shape is configured with a behaviour, a data pool, and a key field:
As noted previously, the de-dupe shape can be used in three modes, which are summarised below.
You can have multiple de-dupe shapes (either in the same process flow or in different process flows) sharing the same data pool. Typically, you would create one data pool for each entity type that you are processing. For example, if you are syncing orders via an 'orders' endpoint and products via a 'products' endpoint, you'd create two data pools - one for orders and another for products.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The key field is the data field that should be used to match records. This would typically be some sort of id that uniquely identifies payload records - for example, an order id if you're processing orders, a customer id if you're processing customer data, etc.
When duplicate data is identified it is removed from the payload; however, exactly what gets removed depends on the configured key field.
If your given key field is a top-level field for a simple payload, the entire record will be removed. However, if the payload structure is more complex and the key field is within an array, then duplicates will be removed from that array but the parent record will remain.
Let's look at a couple of examples.
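The sketch below illustrates both cases with invented data (the de-dupe shape does this for you; the code just mirrors the behaviour described above):

```python
# Invented data - mirrors the behaviour described above; not the shape's implementation.

# Case 1: the key field ("id") is a top-level field, so whole duplicate records are removed.
orders = [{"id": 1}, {"id": 2}, {"id": 1}]
seen_ids, deduped_orders = set(), []
for order in orders:
    if order["id"] not in seen_ids:
        seen_ids.add(order["id"])
        deduped_orders.append(order)
# deduped_orders -> [{"id": 1}, {"id": 2}]

# Case 2: the key field ("line_items.sku") sits inside an array, so duplicates are removed
# from that array but the parent record remains in the payload.
order = {"id": 1, "line_items": [{"sku": "A"}, {"sku": "B"}, {"sku": "A"}]}
seen_skus, deduped_items = set(), []
for item in order["line_items"]:
    if item["sku"] not in seen_skus:
        seen_skus.add(item["sku"])
        deduped_items.append(item)
order["line_items"] = deduped_items
# order -> {"id": 1, "line_items": [{"sku": "A"}, {"sku": "B"}]}
```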
If multiple fields are specified, these values are tracked as one, concatenated value. To track multiple fields separately, use one shape per field.
If data is received via a connector shape, you can navigate the associated data structure to select a field for tracking - for example:
If data is received via a non-connector source (such as a manual payload, API request, or webhook), enter a path to the required field manually - for example:
Place a flow control shape immediately before the add to cache shape and configure it to create batches of 1 at the appropriate level for your data.
However, the important point to consider is that the cache key that you specify here will have been generated dynamically by resolving the payload variable that was specified when the data was added to the cache.
Data pools are created in general settings and are used to organise de-dupe data. Once a data pool has been created it becomes available for selection when configuring a de-dupe shape for a process flow.
When data passes through a de-dupe shape which is set for tracked behaviour, the value associated with the key field for each new record is logged in the data pool. So, the data pool will contain all unique key field values that have passed through the shape.
cache: customerData cache key: customer-1000000001
cache: customerData cache key: customer-1000000002
cache: customerData cache key: customer-1000000003
cache: customerData cache key: customer-1000000004
Filter
Remove duplicate data from the incoming payload so only new data continues through the flow. New data is NOT tracked.
Track
Log each new key value received in the data pool.
Filter & track
Remove duplicate data from the incoming payload AND log each new key value received.
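Conceptually (treating the data pool as a simple set of previously seen key values), the three behaviours compare as in the sketch below - this is an illustration, not the shape's implementation:

```python
# Conceptual sketch - the data pool is modelled as a set of key values already tracked.
data_pool = {"1001", "1002"}
incoming = [{"id": "1001"}, {"id": "1003"}]

# Filter: only records whose key is NOT already in the pool continue; nothing new is tracked.
filtered = [record for record in incoming if record["id"] not in data_pool]

# Track: every new key value is logged to the pool; the payload continues unchanged.
pool_after_track = data_pool | {record["id"] for record in incoming}

# Filter & track: remove duplicates from the payload AND log the new key values.
filtered_and_tracked = [record for record in incoming if record["id"] not in data_pool]
pool_after_filter_and_track = data_pool | {record["id"] for record in filtered_and_tracked}
```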
Step 1: Manual payload
The manual payload shape contains an 'orders' payload with 17 orders in total.
Step 2: Filter
The filter shape ensures that orders are only processed if the id field is not empty.
Step 3: Flow control
The flow control shape is set to create batches of 1 from the payload root level - so every order will be added to its own payload.
Step 4: Add to cache
The add to cache shape is defined to add to a company type cache, named CPT-722. The cache key is created dynamically, where the first part is always order- followed by the value of the first id element found in the incoming payload (e.g. order-5697116045650). All data from the incoming payload will be added to this cache key. Taking our example using flow control, the incoming payload will only ever be a single order.
Step 5: Run flow
When this process flow runs, checking payload information for the add to cache shape shows that 17 payloads have been cached - one payload for each order.
Step 1: Manual payload
The manual payload shape contains two order ids that we want to load from our cache.
Step 2: Load from cache
The load from cache shape is configured to load data from our CPT-722 cache, targeting dynamic cache keys from order-[[payload.*.id]]. Here, the required cache key(s) will be resolved from all (*) ids found in the incoming payload - in this case order-5693105439058 and order-5697116045650.
Step 3: Run flow
When this process flow runs, checking payload information for the load from cache shape shows that two payloads have been loaded - one for each of our given ids.
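As a sketch of what happens here (not platform internals; the cached values are invented), the wildcard resolves to one key per id in the incoming payload:

```python
# Sketch only - illustrates how order-[[payload.*.id]] resolves to multiple cache keys.
incoming = [{"id": 5693105439058}, {"id": 5697116045650}]

keys = [f"order-{record['id']}" for record in incoming]
# ['order-5693105439058', 'order-5697116045650']

cache = {                                   # invented cached order payloads
    "order-5693105439058": {"id": 5693105439058, "status": "paid"},
    "order-5697116045650": {"id": 5697116045650, "status": "pending"},
}
loaded_payloads = [cache[key] for key in keys]   # one loaded payload per resolved key
```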
Data pools store data entities that have been tracked via the de-dupe and track data shapes.
Data pools are created and managed via the data pools option in general settings. From here you can add a new data pool, or view/update an existing data pool.
For more background information on data pools, please see the de-dupe shape and track data pages.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
De-dupe data pools can be created in two ways: from the data pools option in general settings, or 'on the fly' when configuring a de-dupe shape.
You can access existing data pools from general settings.
Step 1 Select the settings option from the bottom of the dashboard navigation bar:
Step 2 Select data pools:
...all existing data pools are displayed:
For each data pool you can see the creation date, and the date that it was last updated by a process flow run.
Step 3 To view details for a specific data pool, click the associated name in the list:
...details for the data pool are displayed:
In the top panel you can change the data pool name/description (click the update button to confirm changes) or - if the data pool is not currently in use by a process flow - you can choose to delete it.
In the lower panel you can see all data in the pool. This data is listed with the most recent entries first - the following details are shown:
Value
The value of the field that was identified as a match for duplicate records. This is the field defined as the key to be used for de-dupe shapes - for example, if the de-dupe key is set to id, the value field shown in the data pool will display id values.
Created by
The name of the process flow where this entry was tracked into the data pool. Click this name to open the associated process flow.
Updated at
The date and time that the record was added to the pool (UTC time).
Branch
Define multiple branches to be executed sequentially, using the same data.