Shapes detailed in this section are available as standard, for all core subscription tiers:
Typically, a process flow run is triggered and a request for data is made via a connector shape - if the request is successful, data is retrieved and the flow continues.
However, there may be scenarios where you need to control whether the connector shape or process flow run should fail/continue based on information returned from the connection request. To achieve this, you can apply a response script to your connector shape.
When a response script is applied to a connector shape, the script runs every time a connection is attempted. The script receives the response code, headers, and body from the request and - utilising response_code actions - returns a value determining whether the connector shape/flow run continues or stops.
Response scripts are just like any other custom script, except they receive additional information from the request - see lines 11 to 14 in the example below:
To implement a response script, you should:
Response scripts are written and deployed in the usual way, via the custom scripts option. However, two additional options can be used for scripts that you intend to apply via connector shapes: response_code and message.
These options are only valid when the script is applied to a connector step as a response script.
The response_code determines how the process flow behaves if a connection request fails. Supported response_code values are:
The message is optional. If supplied, it is output in the run logs.
To apply your response script, access settings for the required connector shape and select your script from the response script dropdown field.
Here we handle the scenario where a connection response appears OK because the status code received is 200, but in fact the response body includes a string (Invalid session) which contradicts this. So, when this string is found in the response body, we want to retry the process flow.
In this case we return a response_code of 2 with a message of Invalid session:
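As an illustration only, a response script along these lines could detect the problem string and request a retry. This is a PHP sketch - the exact way a script receives the response and returns the response_code and message depends on the Patchworks scripting environment, so treat the input/output handling and key names below as assumptions:

```php
<?php
// Illustrative sketch only - how the response details arrive and how the
// result is returned is assumed here, not confirmed.
$data = json_decode(file_get_contents('php://stdin'), true); // assumed: response details arrive as JSON
$body = $data['body'] ?? '';                                  // assumed key for the response body

if (strpos($body, 'Invalid session') !== false) {
    // 2 = fail the process flow and queue it to retry
    $data['response_code'] = 2;
    $data['message'] = 'Invalid session';
} else {
    // 0 = continue
    $data['response_code'] = 0;
}

echo json_encode($data);
```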
Here we show how the payload received from a connection request is checked for an order number and an order status - retrying the process flow if a particular order status is found:
There may be times when you need to define a filter based on incoming data matching one of many given values. Conversely, you might want to define a filter based on incoming data NOT matching one of many given values. This can be achieved using the following operators in string-type filters:
Using these operators, you can specify a comma-separated list of values that a record must have/not have in a string-type field, to be a match.
This information applies wherever filters are available, not just the filter shape.
These operators are designed to work with string-type fields only.
The contains one of many operator is used to match incoming records if the value of a given field DOES match any item from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records with a European country code in the country field (which is a string type field).
You can add a filter for the country field and select the contains one of many operator - then provide a comma-separated list of acceptable country codes as the value:
The resulting payload would only include records where the country field includes one of the specified values - i.e.:
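For illustration only (hypothetical data, not the payload referenced above), a contains one of many filter on country with a value of DE,FR,IE would behave like this:

```
Value:   DE,FR,IE

Input:   [ {"name": "Anna", "country": "DE"},
           {"name": "Bob",  "country": "US"},
           {"name": "Clare","country": "IE"} ]

Output:  [ {"name": "Anna", "country": "DE"},
           {"name": "Clare","country": "IE"} ]
```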
The does not contain one of many operator is used to match incoming records if the value of a given field DOES NOT match any items from a provided (comma delimited) list of values. For example, consider the following payload of customer records:
Suppose you only want to process customer records that do NOT have US or AU in the country field (which is a string type field).
You can add a filter for the country field and select the does not contain one of many operator - then provide a comma-separated list of unacceptable country codes as the value:
The resulting payload would only include records where the country field does NOT include one of the specified values - i.e.:
When defining your 'many values' list as the value for a contains one of many or a does not contain one of many filter, there are a couple of things to keep in mind:
It's important that your 'many values' are specified as a comma-separated list - so in our example:
...will match as required, but:
...will NOT match as required.
Any spaces included in your 'many values' list ARE considered when matching. For example, consider our original payload:
Suppose we are using the contains one of many operator to match all European countries and the value field is defined as below:
Notice that the final IE list item is preceded by a space. This means that our filter will only match the IE country code if it's preceded by a space in the payload, so the output would be:
Our IE record (line 8 in the payload) isn't matched because there's no space before the country code in the country field.
The assert shape is typically used for testing purposes - you can define a static payload which is used to validate that the current payload (i.e. the payload generated up to the point that the assert shape is encountered) is as expected.
To view/update the settings for an existing assert shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure an assert shape, all you need to do is paste the required payload and save the shape - for example:
An assert shape can only be saved if a payload is present. If you add an assert shape but don't have the required payload immediately to hand, you can just enter {} and save.
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields please see our Configuring SFTP connections page - details are the same.
The flow control shape can be used for cases where you're pulling lots of records from a source connection, but your target connection needs to receive data in small batches. Two common use cases for this shape are:
A target system can only accept items one at a time
A target system has a maximum number of records that can be added/updated at one time
The flow control shape takes all received items, splits them into batches of your given number, and sends these batches into the target connection.
Step 1 In your process flow, add the flow control shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates - for example:
Step 3 Move down to the batch level field and select the data element that you are putting into batches. For example:
The data structure in this dropdown field is pulled from the schema associated with the source. If your data is received from a non-connector source (e.g. manual payload, API, webhook, etc.), you can toggle ON the manual input option and enter the data path manually.
Step 4 In the batch size field, enter the number of items to be included in each batch. For example:
Step 5 By default, the payload format is auto-detected but you can set a specific format here if you prefer:
Step 6 If you're creating batches of one record, you can toggle ON the Do not wrap single records in an array option if you want the output to be this:
...rather than this:
Step 7 Save the shape. Now when you run this process flow, data will be split into batches of your given size.
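As a purely illustrative example (hypothetical data), five records with a batch size of 2 would produce three payloads:

```
Input (5 records):  [ {"id":1}, {"id":2}, {"id":3}, {"id":4}, {"id":5} ]

Batch 1:  [ {"id":1}, {"id":2} ]
Batch 2:  [ {"id":3}, {"id":4} ]
Batch 3:  [ {"id":5} ]
```

With a batch size of 1 and the Do not wrap single records in an array option toggled ON, each batch would be output as {"id":1} rather than [ {"id":1} ].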
If you check the payload for the flow control step after it has run, you'll see that there's one payload for every batch created. For example:
When a connector is built, default filters can be applied at the API level, so when a process flow pulls data, the payload received has already been refined.
However, there may be times where you want to apply additional filters to a payload that's been pulled via a connector shape - for example, if the API for a connector does not support particular filters that you need.
The filter shape works with a source payload. As such, it should be placed AFTER a connector shape in process flows.
When specifying a filter value, the maximum number of characters is 1024.
To view/update the settings for an existing filter shape, click the associated 'cog' icon:
Follow the steps below to configure a filter shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload to be filtered originates.
Step 2 Click the add new filter button:
Step 3 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add a manual path.
Step 4 Use remaining operator, type and value options to define the required filter.
Step 5 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 6 Click the create button to confirm your settings.
Step 7 The filter is added to the filter shape - you can now add more filters if needed:
When defining a filter, you can choose from the following types:
You can use the manual payload shape to define a static payload to be used for onward processing. For example, you might define an email template that gets pushed into an email connection, or you might want to test a process flow for a connector that's currently being built by your development team.
The maximum number of characters for a single payload is 100k. Anything larger than this may cause the process flow to fail.
Any text-based data format is supported (JSON, XML, CSV, plain text) - however, keep in mind that subsequent shapes in the flow may only support JSON and XML.
To view/update the settings for an existing manual payload shape, click the associated 'cog' icon:
To configure a manual payload shape, all you need to do is paste the required payload and save the shape - for example:
A manual payload shape can only be saved if a payload is present. If you add a manual payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
The connector shape is used to define which connector should be used for sending or receiving data, and then which endpoint.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Any connector instances that have been set up for your company profile are available to associate with a connector shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our connectors and instances documentation.
When you add a connector shape to a process flow, the options panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connector shape, click the associated 'cog' icon to access the options panel - for example:
Follow the steps below to configure a connector shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
Step 2 Select the endpoint that you want to use - for example:
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
Step 7 Set error-handling options as required. Available options are summarised below:
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Step 9 If required you can set response handling options:
These options are summarised below:
Step 10 Save your changes.
When adding filters for a string-type field, there may be times when standard operators can't achieve what you need. For example, when defining multiple filter conditions, the default condition is AND - so ALL specified filters must be met for a match. But what if you need an OR condition?
For more complex filtering requirements, regex can be used.
This information applies wherever filters are available, not just the filter shape.
Let's take the following payload:
Suppose we want to retrieve any items where the value of the fruit field contains peaches OR apples. We might be tempted to add two string-type filters in the filter shape:
However, this wouldn't return any matches because we'd be looking for any records where peaches AND apples are present. Instead, we can define one string-type filter with regex:
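For example, a regex value along these lines (using alternation) would match records where the fruit field contains either word - since preg_match is used for pattern matching, standard PCRE syntax applies:

```
(peaches|apples)
```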
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is for SFTP connections; however, it also applies to FTP.
When you install the Patchworks SFTP connector from the marketplace and then add an instance, you'll find that two authentication methods are available:
Auth method | Summary |
---|---|
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via the ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this - for example:
Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, with current filenames:
rename:store1/completed_orders/{{current_filename}}
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
The path in your SFTP connection shape should be set to:
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
Value | Notes |
---|---|
This opens the options panel - for example:
The manual data path field supports .
Presentation of the value field is dependent upon your selected type. For example, if the type field is set to specific date, you can pick a date for the value:
When defining a value, you can include variables.
Don't forget that when a process flow is running you can check the payload at each step - this is a great way to check that your filter is refining data as expected.
Type | Expected value |
---|---|
This opens the options panel - for example:
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the marketplace.
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a .
Option | Summary |
---|---|
Option | Summary |
---|---|
Option | Summary | Endpoint method |
---|---|---|
Further information on these authentication methods can be found on our page.
Option | Summary |
---|---|
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The data object passed to the custom script contains three items: payload, meta, and variables.
Our script creates a timestamp, puts it into the meta, and then puts the meta into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
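A minimal sketch of such a script in PHP is shown below. The way the data object is read and returned is an assumption (it depends on the Patchworks scripting environment), and the folder/filename structure is hypothetical - the principle is simply: build a date-based path, place it in meta as original_filename, and return the data object.

```php
<?php
// Illustrative sketch only - input/output handling is assumed.
$data = json_decode(file_get_contents('php://stdin'), true); // assumed: data object (payload, meta, variables) arrives as JSON

// Build a date-named folder and filename, e.g. orders/2024-01-31/export.csv (hypothetical structure)
$data['meta']['original_filename'] = 'orders/' . date('Y-m-d') . '/export.csv';

echo json_encode($data); // return the data object with the updated meta
```

The path field in the subsequent SFTP PUT connection can then reference {{original_filename}}, so files land in the date-named folder.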
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a store1 folder:
The path is added as a regular expression, explained below:
0 | Continue |
1 | Fail the connector step and retry. The connector step is marked as failed and the queue will attempt it again. |
2 | Fail the process flow and queue it to retry. The process flow is marked as failed and queued for a retry. |
3 | Fail the process flow and do not retry. |
Retries | Sets the number of retries that will be attempted if a connection can't be made. You can define a value between |
Backoff | If you're experiencing connection issues due to rate limiting, it can be useful to increase the You can define a value between |
Allow unsuccessful statuses | If you want the process flow to continue even if the connection response is unsuccessful, toggle this option |
String | A text string - for example : |
String length |
Number | A number - for example: |
Specific date | A day, month and year, selected from a date picker. |
Dynamic date | Specify a date/time which is relative to a +/- number of units (seconds, minutes, hours, days, months, years). For example: |
Boolean |
Null comparison | A field is |
Variable | Designed specifically for cases where you are comparing a variable value as the filter comparison. When selected, a |
Raw | Push the payload exactly as it is pulled - no modifications are made. |
First | This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. So, When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed. |
Wrapped | This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects. The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes. Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, |
Save response AS payload | Set this option to | POST PUT PATCH DELETE |
Save response IN payload | Set this option to | GET POST PUT PATCH DELETE |
Expect an empty response | Set this option to | POST GET |
This will return the correct payload - i.e. any records where peaches OR apples are present:
FTP command | A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command. For example: |
Root | This field is only needed if you are specifying a regular expression in the subsequent |
Path | If the name of the file that you want to target is static and known, enter the full path to it here - for example: If the name is variable and therefore unknown, you can specify a regular expression as the |
Original filename |
Original path |
User pass | The instance is authenticated by providing a username and password for the SFTP server. |
Key pass | The instance is authenticated by providing a private key (RSA |
Mappings are at the heart of Patchworks.
When we pull data from one system and push it into another, it’s unlikely that the two will have a like-for-like data structure. By creating mappings between source and target data fields, the Patchworks engine knows how to transform incoming data as needed to update the target system.
The illustration below helps to visualise this:
In process flows, the map shape is used to define how data fields pulled from one connector correlate with data fields in another connector, and whether any data transformations are required to achieve this.
If your organisation has in-house development expertise and complex transformation requirements, you can use our custom scripts feature to code your own scripts for use with field mappings.
The map shape includes everything that you need to map data fields between two connections in a process flow. When you start to create mappings, there are two approaches to consider:
Having added a map shape to a process flow, click the associated 'cog' icon to access settings:
For more information about working with these settings, please see our Working with field mappings page.
The generate automatic mapping option is used to auto-generate mappings between your selected source and target connections.
All Patchworks prebuilt connectors (i.e. connectors installed from the Patchworks marketplace) adopt a standardised taxonomy for tagging common fields found in data schemas for a range of entity types (customers, orders, refunds, products, fulfillments, etc.). So, if your process flow includes connections to sync data between two prebuilt connectors, it's highly likely that auto-generating mappings will complete a lot of the work for you.
Once auto-generation is complete, mapping rows are added for all fields found in the source data - for example:
Where matching tags are found, the mapping rows will include both source and target fields (you can adjust these manually and/or apply transformations, as needed).
Any fields found in the source data which could not be matched by tag are displayed in partial mapping rows, ready for you to add a target manually.
For more information about using the generate automatic mapping feature, please see our Working with field mappings page.
If your process flow includes custom connections (i.e. connectors that have been built by your organisation, using the Patchworks connector builder), you can still use the generate automatic mapping option. The success of this will depend on whether field tagging was applied to your connector during the build:
If yes, your custom connector will behave like any of our prebuilt connectors when it comes to auto-generated mappings, adding fully mapped rows for all matched tags.
If no, Patchworks won't be able to match any source fields to a target automatically - partial mapping rows are added for all source fields found, ready for you to add a target manually.
It's very easy to add individual mapping rows manually, using the add mapping rule option:
We recommend that you always try the automatic mapping option first and then manually add any extra rows if needed. However, there's no reason that you couldn't add all of your mappings manually if preferred.
For more information about adding mappings manually, please see our Working with field mappings page.
This page provides guidance on using the map shape to configure field mappings between two connections.
Step 1 Click the source endpoint option:
...source and target selection fields are displayed:
Step 2 Use source and target selection fields to choose the required connector instance and associated endpoints to be mapped - for example:
Step 3 Click the generate automatic mapping button:
...when prompted to confirm this operation, click generate mapping:
As we're configuring a new map shape, there's no danger that we would overwrite existing mappings. However, always use this option with caution if you're working with an existing map shape - any existing mapping rules are overwritten when you choose to generate automatic mappings.
If you need to access the generate automatic mapping option for an existing map shape, you need to click into the source and target details first.
Step 4 Patchworks attempts to apply mappings between your given source and target automatically. A mapping rule is added for each source data field and, where possible, a matched target field - for example:
From here you can refine mappings as needed. You can:
Step 5 Toggle wrap input payload and wrap output payload options ON/OFF as required.
Where:
wrap input payload ON: wraps the incoming payload in an array [ ] ONLY for processing within the map shape.
wrap output payload ON: wraps the outgoing payload in an array [ ] ONLY for onward processing.
The payload examples below show how these options work in practice:
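Both examples below use a hypothetical single-record payload, purely for illustration:

```
wrap input payload ON:   { "id": 1 }  →  [ { "id": 1 } ]   (within the map shape only)
wrap output payload ON:  { "id": 1 }  →  [ { "id": 1 } ]   (for onward processing only)
```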
Step 6 Save changes.
You can add as many new mapping rules as required to map data between source and target connections.
There may be times where you don't want to (or can't) use the payload fields dropdown to select a field from your source/target data schema. In this case, you simply select the manual input field and enter the full schema path for the required field.
You can change the display name and/or the field associated with the source or target for any mapping rule.
If you've used the generate automatic mapping option to generate an initial set of mappings, you may find that some source fields could not be auto-mapped. In these cases, a mapping rule is added for each un-mapped source field, so you can either add the required destination or delete the rule.
If required, you can map a source field to multiple target fields - for example, you might need to send a customer order number into two (or more) target fields.
Sometimes it can be useful to map multiple source fields to a single target field. For example, you might have a target connection which expects a single field for 'full name', but a source connection with one field for 'first name' and another field for 'surname'.
In this case, you would define mappings for the required source and target fields, then add a transform function to concatenate the two source fields.
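For illustration (hypothetical field names), two source fields mapped to a single target field with a concatenation transform might produce:

```
Source:  { "first_name": "Jane", "surname": "Smith" }
Target:  { "full_name": "Jane Smith" }
```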
When you choose to delete a mapping rule, it's removed from the list immediately. However, the deletion is not permanent until you choose to save the mapping shape.
The custom dynamic date transform function is used to set a target field to the current date and time, based on the date and time that the process flow runs. You can also define rounding and adjustments.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom dynamic date:
Step 5 Optionally, you can add adjustment settings - for example:
These options are summarised below:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom dynamic date will be mapped to the given target field.
If you're building multiple process flows with similar requirements for field mappings, you can export the configuration for a map shape, and then import that configuration into another map shape.
When a map shape configuration is exported, a JSON file is generated and saved (automatically) to the default download folder for your browser. All field mappings and associated transformations are exported. You can then import this file to any other map shape within:
The same process flow
Other process flows for your company profile
Other process flows for any of your linked company profiles
To export the configuration for a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to export:
Step 2 Click the export map button:
Step 3 The configuration is exported and saved to your default downloads folder. The filename is always map.json.
To import a mapping configuration into a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to update:
You can import a mapping configuration into a new map shape, or into an existing one. If you import a configuration into an existing map shape, any existing mappings will be overwritten.
Step 2 Click the import map button:
Step 3 Navigate to the downloaded map configuration file on your local drive, then select it.
The default filename for exported map configuration files is map.json.
Step 4 Having selected a valid mapping configuration file to import, the import completes immediately.
Field transformations can be defined to change the value of a data field pulled from a source system before it is sent to its target. A transformation is comprised of one or more functions.
This page explains how to add a new transformation for a field mapping, and how to remove functions and transformations.
For a summary of available transform functions please see the Field mapping transformation function reference section.
For information about adding a field transformation using a cross-reference lookup table, please see our cross-reference lookups section.
To add a new transformation for a field mapping, you start by adding a new transformation and then build the required functions. To do this, follow the steps below:
Step 1 Access the required process flow and then edit the map shape to be updated with a transform:
Step 2 Find the mapping that you want to update, then click the transform icon (between source and target elements). For our example, we're going to add a prefix to the 'id' field:
Step 3 Click the add transform button:
Step 4 Use the select a function field to choose the type of function that you need to use (functions are organised by type):
Step 5 Depending on the type of function you select, additional fields are displayed for you to complete. Update these as required - for our example, we're entering the text to be added as a prefix:
Step 6 Now we need to confirm which source field this transform should be applied to - click the add field button:
Step 7 Select the required field:
In straightforward scenarios, this will typically be the same source field as defined for the mapping row. However, more complex scenarios may prompt multiple options here - for example, if you apply multiple transforms to the same mapping.
Step 8 Accept your changes.
Step 9 Add more fields if necessary.
Step 10 When you're satisfied that all required fields have been added, accept changes and then save the shape settings.
The array join transform function is used to join elements of an array as a string, with a user-defined delimiter. For example, you might have product data in an array:
...and need to convert these items to a string before pushing to a single destination field (with each item in the string delimited by a character of your choice):
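For illustration (hypothetical data), joining an array with a | delimiter would work like this:

```
Input:   ["apples", "pears", "peaches"]
Output:  "apples|pears|peaches"
```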
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select join from the array category:
Step 5 In the delimiter field, enter the character that you want to use to delimit each array field in the string:
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom number transform function is used to map a given number to a target field.
If you've added/updated a field mapping before, you'll be used to selecting a source field and a target field. However, when a custom number transformation is used we don't select a source field - the custom number transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom number:
Step 5 Move down to the custom number field and enter your required number - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom number will be mapped to the given target field.
The cast to string transform function is used to change the data type associated with a source field from number to string. For example, you might have an id field in a source system that's stored as a number value, but your destination system expects the id to be a string.
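For illustration (hypothetical data):

```
Input:   { "id": 12345 }
Output:  { "id": "12345" }
```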
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to string from the number category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The format transform function is used to change a date value to a different format. For example:
...might be changed to:
A range of predefined date formats is available for selection, or you can set your own custom format.
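For illustration (hypothetical values - the exact formats available depend on your selection), a source date might be converted as follows:

```
Input:   2023-08-10T10:30:00
Output:  10/08/2023
```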
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select format from the date category:
Step 5 Click in the format field to select a predefined date format that incoming dates should be converted to:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The following characters are commonly used to specify days in custom format dates.
The following characters are commonly used to specify months in custom format dates.
The following characters are commonly used to specify years in custom format dates.
The following characters are commonly used to specify times in custom format dates.
Unix Epoch dates must be received as a number, not a string - i.e. 1701734400 rather than "1701734400".
The round date transform function is used to round source dates to either the start or end of the day, where:
start of day changes the time to 00:00 for the received date
end of day changes the time to 23:59 for the received date
So, you can round a given source date before sending the rounded value into a given target field.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round date:
Step 5 Choose your required rounding:
Step 6 Accept your changes and save the transformation - at this point your mapping row is displayed without a target. From here, you can go ahead and add a target field:
The custom static date transform function is used to set a target field to a given date and time.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom static date:
Step 5 Click anywhere in the date field, or click the calendar icon, to open a date picker:
Step 6 Set the required date and time.
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom static date will be mapped to the given target field.
The cast boolean to string transform function is used to change the data type associated with a source field from boolean to string.
A boolean data type can have only two possible states: true or false.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast boolean to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The round number transform function is used to change the number of decimal places for a number value. For example:
...might be changed to:
With the round number transform you can specify the number of decimal places that should be applied to incoming numeric values.
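For illustration (hypothetical values), with decimal places set to 2:

```
Input:   10.4567
Output:  10.46
```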
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round number from the number category:
Step 5 Move to the decimal places field and enter the required number of decimal places required for transformed values - for example:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to boolean transform function is used to convert given values to true/false based on PHP logic, but with the option to define overrides for specific input values. For example, consider the following payload:
Suppose we want to define a mapping for the fruit_included item (line 2) but convert the numeric value to a true/false (boolean) value. Without an override setting, transforming the number to a boolean value would result in the following payload:
This is because standard PHP logic determines that 0 equates to a boolean false value. But in this specific case, a quirk in our source data is such that the 0 value actually means true - hence a list of fruits follows.
So, we need a way to specify an override for this field, where 0 equates to true. We can do this via the other > cast to boolean transform function, thereby achieving the desired result:
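For illustration (hypothetical payload values, matching the scenario described above), with 0 entered as a true values override:

```
Input:   { "fruit_included": 0, "fruit": ["apples", "pears"] }
Output:  { "fruit_included": true, "fruit": ["apples", "pears"] }
```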
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to boolean from the other category:
Step 5 In the true values (override) and/or false values (override) fields, enter specific values that should override standard PHP logic:
Multiple override values can be entered - use a comma to separate each one.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The math transform function is used to perform a mathematical operation for selected fields. For example, your incoming payload might include customer records, each with a series of numeric value fields that need to be added together so the total can be pushed to a total field in the target system.
The following mathematical operations are available:
Add
Subtract
Multiply
Divide
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes customer records, each with three value fields (value1, value2, value3) that must be added together and pushed to a total field in the target system.
The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be added together and then pushed to the target.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a math transformation. In the example below, we have a row that's currently set to map the source first name field into the destination full name field:
Step 3 On the source side of the mapping row, we need to include all the fields to be used in our mathematical operation. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to use - for example:
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to include in the mathematical operation.
With all required source fields defined for our mapping row, we can add a math transform function to define the required calculation based on these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select math from the number section in the list of transform functions:
...math options are displayed:
Step 4 Click in the operator field and select the type of calculation to be performed - you can choose from add, subtract, multiply and divide:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be used in the calculation:
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be used - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to include any more source fields to be used in the calculation. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are processed in the sequence that they are added here.
Step 11 Having added all required source fields to be calculated, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the mathematical operation will be performed for the given source fields and the total value is pushed to the defined target field. The example below shows an incoming payload before and after the math transformation is applied:
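The values below are illustrative only - an add operation across value1, value2 and value3, pushed to total, would look like this:

```
Source:  { "value1": 10, "value2": 20, "value3": 5 }
Target:  { "total": 35 }
```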
The script transform function is used to apply an existing custom script to the source value, and the updated field value is pushed to the target field.
Make sure that the required script has been created before applying it as a transform function.
Any payloads passed in and out of a script transform are verbatim - there is no JSON encoding.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select script:
Step 5 Click in the script version field and select the script/version that you want to apply for this field transformation - for example:
All scripts and versions are available for selection. If you choose a script/version that is not currently deployed, it will be deployed automatically.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select/update the target field and then the mapping in the usual way.
Note that when a string type is selected, you can choose the regex operator and define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match is used for pattern matching. For an example, see the regex filtering guidance on this page.
You can also match a string-type field according to whether or not the value is one of many items in a given list. For more information, see Using contains one of many or does not contain one of many for string filters.
A number which represents the expected string length for the received payload. Here, the 'payload' might refer to a targeted field within the incoming payload, or the entire payload.
For example, if you want to ensure that an objectId field is never empty, you would define a filter for objectId greater than 0:
In this case, toggling the keep matching option to OFF means that the ongoing payload will include only items where this field is not empty. Conversely, toggle this option on if you want to pass on empty payloads for any reason.
You can use the same principle to check for empty payloads (as opposed to a specific field). In this case you would define a filter for * greater than 0:
It's important to be aware that relative date/time variables are affected by the .
A true or false value. For example, if you only want to consider items where an itemRef field is set to true, you would define a filter for itemRef equals true:
Generally, if your process flow is pulling from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
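For illustration (hypothetical data), the first setting unwraps a single record that was pulled as an array:

```
Pulled:  [ { "customer_id": 100, "name": "Jane Smith" } ]
Pushed:  { "customer_id": 100, "name": "Jane Smith" }
```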
This option provides the ability to access the response body via the payload. This can be useful for cases where an API returns a successful response despite an error - by inspecting response information from the payload itself, you can determine whether or not a request is successful.
By default, the response is saved in a field named response - for example:
However, when the save response in payload option is toggled on, you can specify your preferred field name - for example:
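For illustration only - the payload values, response structure and custom field name below are hypothetical:

```
Default field name:   { "order_id": 1001, "response": { "status": 200, "body": { "id": "abc-123" } } }
Custom field name:    { "order_id": 1001, "shopify_response": { "status": 200, "body": { "id": "abc-123" } } }
```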
When specifying a path to a given folder in this way, you don't need a / at the start or at the end.
This field is not currently used. For information about working with original filenames please see the section below.
This field is not currently used. For information about working with original paths please see the section below.
Any instances defined for your company profile are available to select as the source or target. If you aren't using a connector to retrieve data (for example, you are sending in data via the Inbound API or a webhook), you won't select a source endpoint - instead, use the override source format dropdown field to select the format of your incoming data:
Field | Summary | Example |
---|---|---|
Transform | Description |
---|---|
Transform | Description |
---|---|
Transform | Description / Notes |
---|---|
Internally, the format transform function uses Laravel's date format methods, which in turn call PHP date format methods. Commonly used format specifiers are listed below - full details are available in the PHP documentation.
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
Specifier | Summary |
---|---|
If your Unix dates are provided as strings, you should convert these to numbers. To achieve this, add a transform for the date field BEFORE the date format transform function.
In our example, our source data is coming in via a manual payload, so we are defining the payload field manually - if you're using a connector shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 9 Go to stage 2.
All source fields that were added for this mapping in stage 1 will be available for selection here.
For a list of commonly used date specifiers that can be used in custom formats, see the specifier tables below.
Round
Select start of day to change the time to 00:00:00 for the date the process flow is run.
Select end of day to change the time to 23:59:59 for the date the process flow is run.
Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00
If set to start of day, the transformed value would be 2023-08-10 00:00:00.
If set to end of day, the transformed value would be 2023-08-10 23:59:59.
Units
If you want to adjust the date/time, select the required unit - choose from second, minute, hour, day, week, month or year.
Suppose that the process flow runs at the following date/time: 2023-08-10 10:30:00 and you want to adjust it by 1 day.
In this case, you would select day as the unit and specify 1 as the adjustment value. However, if you wanted to adjust by 1.5 days, you would set the unit to hour and specify 36 as the adjustment value.
Adjustment
Having selected an adjustment unit, enter the required number of that unit here.
See the units examples above.
d | Day of the month with leading zeros (01 to 31). |
j | Day of the month without leading zeros (1 to 31). |
D | A textual representation of a day in three letters (Mon to Sun) |
l | A full textual representation of the day of the week (Monday to Sunday) |
m | Numeric representation of a month with leading zeros (01 to 12). |
n | Numeric representation of a month without leading zeros (1 to 12). |
M | A textual representation of a month in three letters (Jan to Dec) |
F | A full textual representation of a month (January to December). |
Y | Four-digit representation of the year (e.g. 2023). |
y | Two-digit representation of the year (e.g. 23). |
H | Hour in 24-hour format with leading zeros (00 to 23). |
i | Minutes with leading zeros (00 to 59). |
s | Seconds with leading zeros (00 to 59). |
a | Lowercase Ante meridiem (am) or Post meridiem (pm). |
A | Uppercase Ante meridiem (AM) or Post meridiem (PM). |
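As an example of combining these specifiers, the format d/m/Y H:i would render 10 August 2023, 09:05 as 10/08/2023 09:05. In PHP terms (the same specifiers the format transform uses internally):

```php
<?php
// Standard PHP date formatting using the specifiers listed above.
echo date('d/m/Y H:i');   // e.g. 10/08/2023 09:05
echo date('D, j M Y');    // e.g. Thu, 10 Aug 2023
```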
The contains one of many transform function is used to match string values against a (comma delimited) list that you want to include.
For details, please see: Using contains one of many or does not contain one of many for string filters.
The cache lookup transform function is used to lookup and use data from a cache created earlier in the flow.
If caches have been added to your current process flow or company-level caches have been added for use in any other process flows, you can reference these in field mapping transformations.
For details please see Referencing a cache in mapping transformations in our cache section.
The does not contain one of many transform function is used to match string values against a (comma delimited) list of values that you want to exclude.
For details, please see: Using contains one of many or does not contain one of many for string filters.
Join elements of an array as a string, based on a defined delimiter. |
Apply the date that a process flow runs, with or without adjustments. |
Apply a static date. |
Convert a date to a predefined or custom format. |
Round a date up/down to the start/end of the day. |
Time now | Returns the current date and time in your required format. |
Timezone | Convert dates to a selected timezone. |
Cast to boolean | Convert a number value to a boolean (true/false) value based on PHP logic. |
Change the source field data type from |
Ceiling | Round up to the nearest whole number. |
Apply a static number. |
Floor | Round down to the nearest whole number. |
Make negative | Convert number to a negative. |
Make positive | Convert number to a positive. |
Perform a mathematical operation for selected fields. |
Change the number of decimal places. |
Define override values for conversion to true/false. |
Change the source field data type from |
Reference a value from cached data. |
Convert weight | Convert a specified weight unit to a given alternative. |
Apply a true or false value. |
Fallback | Set a default value to be used if the given input is empty. Blank values are supported. |
Map |
Convert a null value to an empty string. |
Convert a null value to zero (0). |
Convert a source value to null. |
Change the source field data type from |
Change the source field data type from |
Join selected fields with a selected character. |
Specify a comma separated list of field values to be matched for inclusion. |
Convert string to boolean | Convert a string value to a boolean (true/false) value based on PHP logic. |
Country code | Apply country codes of a selected type (Alpha 1, Alpha 2, Numeric). Note that this transform will cause the map step to fail if an empty value is received. |
Country name | Return the country name for a country code. Note that this transform will cause the map step to fail if an empty value is received. |
Apply static text or reference variables. |
Specify a comma separated list of field values to be matched for exclusion. |
Get the first word from a string. |
Hash | Convert a string to a SHA1 Hash. |
Encode data into JSON format. Note that although this is listed as a string-type transform, any data type can be encoded. |
Get the last word from a string. |
Limit | Truncates a string to a given length. |
Lowercase | Convert to lowercase. |
Pad an existing string of characters to a given length, using a given character. |
Prefix | Add a string to the beginning of a field. |
Replace any given character(s) with another character. |
Split elements of a string into an array. |
Substring | Return a given number of characters from a given start. |
Suffix | Add a string to the end of a field. |
Trim whitespace | Remove whitespace from around a string. |
Uppercase | Convert to uppercase. |
URL decode | Convert an encoded URL into a readable format. |
URL encode | Convert a string to a URL encoded string. |
The null value transform function is used to replace the value of a source field with null.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null value from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
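For illustration, a minimal before/after sketch (field name and value hypothetical) of the null value transform applied to a single field:

```json
{
  "before": { "legacy_reference": "REF-884" },
  "after": { "legacy_reference": null }
}
```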
The concatenate transform function is used to join the values for two or more source fields (using a given joining character) and then map the output of this transformation to a destination field.
For example, you might have a source system that captures the first name and last name for customer records, and then a destination system that expects this information in a single name field.
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes the first name and last name for customer records, but our destination system expects this information in a single full_name field. The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be joined and then pushed to the specified destination.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a concatenate transformation. In the example below, we have a row that's currently set to map the source first name field into the destination full name field:
Step 3 On the source side of the mapping row, we need to add all the fields that need to be joined. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to join - for example:
In our example, our source data is coming in via a manual payload so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to join.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a concatenate transform function to join the values for these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select concatenate from the string section in the list of transform functions:
...concatenate options are displayed:
Step 4 In the join character field, enter the character that you want to use to join your source fields - for example, a hyphen or a space:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be joined:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be joined - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to add any more source fields to be joined. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are joined in the sequence that they are added here.
Step 11 Having added all required source fields to be joined, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the given source fields for this mapping row will be joined and then that value is pushed to the target. The example below shows an incoming payload before and after the concatenate transformation is applied:
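The original example isn't reproduced here, but a minimal sketch (field names hypothetical, join character set to a space) would look like this:

```json
{
  "before": { "first_name": "Jane", "last_name": "Smith" },
  "after": { "full_name": "Jane Smith" }
}
```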
The null to string transform function is used to convert incoming null values to an empty string - for example:
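A minimal sketch (field name hypothetical):

```json
{
  "before": { "middle_name": null },
  "after": { "middle_name": "" }
}
```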
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom boolean transform function is used to map a value of true
or false
to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom boolean transformation is used we don't select a source field - the custom boolean transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom boolean:
Step 5 Move down to the value field and select your required true/false value - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the selected custom boolean value will be mapped to the given target field.
The null to zero transform function is used to convert incoming null values to zero (0) - for example:
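A minimal sketch (field name hypothetical):

```json
{
  "before": { "loyalty_points": null },
  "after": { "loyalty_points": 0 }
}
```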
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null to zero from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom string transform function is used to map a given string to a target field. This string can be static, or you can reference flow variables and cached data.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom string transformation is used we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom string:
Step 5 Move down to the custom string field and enter your required text or variables - for example:
For more information about referencing flow variables in a custom string, please see our Referencing flow variables in field mapping transformations page. For more information about referencing cached data in a custom string, please see our Referencing a cache in mapping transformations page.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom string (or associated values from variables) will be mapped to the given target field.
The cast to float transform function is used to change the data type associated with a source field from string to float.
A float is a type of number which uses a floating point to represent a decimal or fractional value. Floats are typically used for very large or very small values with many digits after the decimal point - for example: 5.3333 or 0.0001.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to float from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The JSON encode transform function is used to encode incoming values as JSON. For example, you might have product data in a string:
...and need to encode the values for pushing to the destination system:
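As an illustration (field names and values hypothetical), a nested object can be encoded into a JSON string:

```json
{
  "before": { "dimensions": { "width": 10, "height": 20 } },
  "after": { "dimensions": "{\"width\":10,\"height\":20}" }
}
```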
Although this is listed as a string type transform, in fact any data type can be encoded.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select JSON encode from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes:
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The replace transform function is used to replace an existing source string value with either:
An alternative string value
An empty value
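For illustration (field name and values hypothetical), with search set to _ and replace set to -, a SKU field would be transformed like this:

```json
{
  "before": { "sku": "ABC_123_XL" },
  "after": { "sku": "ABC-123-XL" }
}
```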
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select replace from the string category:
Step 5 Update search and replace fields with your required values:
For the replace field, you can enter another string or leave the field blank to replace the source with an empty value.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The last word transform function is used to extract the last word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the last item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the last word. The transform checks incoming string values and determines the 'last word' to be the word after the last occurrence of the given delimiter.
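A minimal sketch (field name and values hypothetical), using a comma delimiter:

```json
{
  "before": { "tags": "coffee,tea,juice" },
  "after": { "tags": "juice" }
}
```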
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select last word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped: . + * ? ^ $ ( ) [ ] { } | \ /
For example, a delimiter of * would be entered as: \*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes:
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The first word transform function is used to extract the first word of an incoming string value, based on a user-defined delimiter. For example, you might have product data in a string:
...and need to extract just the first item in the string for pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine the first word. The transform checks incoming string values and determines the 'first word' to be the word before the first occurrence of the given delimiter.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select first word from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped: . + * ? ^ $ ( ) [ ] { } | \ /
For example, a delimiter of * would be entered as: \*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice):
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to number transform function is used to change the data type associated with a source field from string to number. For example, you might have an id field in a source system that's stored as a string value, but your destination system expects the id to be a number.
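A minimal before/after sketch (field name and value hypothetical):

```json
{
  "before": { "id": "10001" },
  "after": { "id": 10001 }
}
```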
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to number from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The pad transform function is used to pad an existing string of characters to a given length, using a given character. You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
The payload item below contains a string that's 8 characters long:
If we apply padding to a length of 20 using a * character to the right, the result would be:
Here, we have an extra 12 * characters to the right, giving a string length of 20. However, if we apply the * character to both, the result would be:
Now the padding is applied with 6 characters to the left of the original string and 6 characters to the right.
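To make this concrete, the sketch below uses a hypothetical 8-character string padded to a length of 20 with the * character (the keys are just labels for the two directions):

```json
{
  "original": "ORD-1234",
  "pad_right": "ORD-1234************",
  "pad_both": "******ORD-1234******"
}
```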
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select pad from the string section:
Step 5 Click in the direction field and select where you would like padding to be applied:
You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
Step 6 In the length field, specify the number of characters that you'd like the final (i.e. transformed) string to be - for example:
Step 7 In the pad character field, specify the character that you'd like to use for padding - for example:
If you want padding to be applied with spaces, press the space bar once in this field.
Step 8 Click the add field button:
Step 9 Click in source fields and select the source field to be used for this transform:
Step 10 Accept your changes (twice).
Step 11 Save the transform.
The split string transform function is used to split elements of a string into an array, with a user-defined delimiter. For example, you might have product data in a string:
...and need to convert these items to an array before pushing to the destination system:
In this case, items in our source string value are delimited with a comma, so we can use this to determine where each split occurs. The transform checks incoming string values and determines each array item to be the word after each comma.
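A minimal sketch (field name and values hypothetical), using a comma delimiter:

```json
{
  "before": { "tags": "coffee,tea,juice" },
  "after": { "tags": ["coffee", "tea", "juice"] }
}
```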
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform icon for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select split from the string category:
Step 5 In the delimiter field, enter the character that delimits elements in the string:
If you use any of the following characters, they should be escaped: . + * ? ^ $ ( ) [ ] { } | \ /
For example, a delimiter of * would be entered as: \*
A space is a valid character.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The split shape is used to split out a given data element from a payload. When you split data, the specified element (including any nested elements) is extracted for onward processing.
For example, your process flow might be pulling customer data from a source connection, but you need to send address details to a different endpoint. In this case, you'd use the route shape to create two different routes, mapping just customer data down one, and splitting out addresses for the other.
Step 1 In your process flow, add the split shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates:
Step 3 Move down to the level to split section and use the dropdown data path to select the required data element to split - for example:
Remember - any data (including nested data) within the selected element will be split out into a new payload.
Step 4 If required, you can add a wrapper key. This wraps the entire payload in an element of the given name - for example:
...would wrap the payload as shown below:
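For instance (wrapper key and payload content hypothetical), a wrapper key of orders would produce a payload along these lines:

```json
{
  "orders": [
    { "id": 1001, "status": "paid" },
    { "id": 1002, "status": "pending" }
  ]
}
```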
Step 5 Save the shape.
The track data shape is used to track processed data, based on field paths that you define. When data passes through a track data shape, the values associated with your defined field paths are tracked - which means they can be looked up later from the tracked data page.
For example, you might want to track all customer_id values that pass through a flow, so at any time you can quickly check if/when/how a given customer record has been processed.
Tracked data is available for 15 days after it was last tracked.
Depending on data volumes, allow 10 minutes for tracked data to be visible in data pools.
The track data shape works with incoming payloads from a , a , an , or a .
, , and variables are supported when defining success messages.
JSON payloads are supported.
You can add as many track data shapes to a process flow as required. For example, you might place one immediately after a receiving connector to track everything received before anything else happens to the data, and another after the final sending connector to track everything sent into your destination system.
To add and configure a new track shape, follow the steps below.
Step 1 In your process flow, add the track data shape in the usual way:
Step 2 Configure the settings as required - the table below summarises available fields:
Step 3 Save your settings., then access them again:
...you'll now see success criteria options at the bottom of the shape settings drawer:
Step 4 The success criteria options are optional. If you don't need to apply these then your setup is complete - close the settings drawer and continue with your process flow as required. If you do want to use these options, see below for guidance.
When data passes through a track data shape, specified data fields are tracked and by default, tracked data is marked as a success.
However, there may be times where you want to control the conditions under which the status of tracked data is deemed a success or a failure, and to record this outcome for future reference. The success criteria section allows you to:
When tracked data is marked as failed, it is still tracked and the shape still processes successfully - for example:
In this run log, notice that tracked data is marked as a failure (1) but tracked data is stored (2) and the track data step succeeds (3).
Any conditions that you want to apply can be added via filters. To define a new filter, click the add filter button:
You can add as many filters as you need - multiple filters work together with an 'AND' operator. Remember that you're defining conditions that must be met for a success outcome - if multiple filters are present they must ALL be matched. If one or more filters are not matched, the associated tracked data is marked as a failure.
Filters can be based on any field(s) found in your data, irrespective of whether you have chosen to track them.
No success criteria filters are defined
Success criteria filters are defined and the outcome is success
Success criteria filters are defined and the outcome is failure
All process flows must begin with a trigger - it’s this that determines when and how frequently the process flow will run. For this reason, all process flows are created with a trigger shape already in place. You should edit this shape to apply your required trigger type:
Having accessed settings for a trigger shape, you can select the required trigger type - this determines any subsequent options that are displayed:
The notify shape is used to create custom notification messages for output to run logs and email messages.
To achieve this, you compose a notification template within the shape settings using any combination of static text and variables. When the process flow runs and hits this shape, the notification message is generated from your defined template and is then:
Output to the run logs AND/OR
Emailed to recipients in selected notification groups
Notification templates can include dynamic content from variables.
Email notifications are sent irrespective of whether a process flow is .
The maximum number of email notifications that can be sent (across all process flows for your company profile) is determined by your . If you manage , each one of these will have its own allowance.
Notification templates can include dynamic content from meta, flow, and payload variables.
Variables return the first 100 characters of the associated content.
In the example below, we use two flow variables to retrieve a store name (store_name) and a team name (query_team), and a payload variable (our_id) to retrieve required information from the payload:
Step 1 In your process flow, add the notify shape in the usual way:
Step 2 Select an alert level from the dropdown field:
The selection made here determines how this notification is displayed in logs and email messages:
Email notifications always include the alert level as a status, which can be useful if you want to define mailbox filters based on the alert level. An example message is shown below:
Step 3 Choose notification channel(s) to be used - i.e. how these notifications should be communicated:
Available options are summarised below:
Step 4
This field is not displayed if the channel is set to log in the previous step.
The email limit determines the maximum number of emails that the notify shape can send per flow run:
For example, if you select a notification group that contains 20 recipients and set the email limit to 10, the first 10 recipients will receive emails and the remaining 10 in the group will NOT receive emails.
Step 5
This field is not displayed if the channel is set to log in the previous step.
If you want to send this notification to email recipients, select the required notification group:
Remember that the email limit defined in the previous step may limit the number of recipients to receive emails. It's a good idea to check how many recipients are in your selected notification group, and that your email limit is set appropriately.
Step 6 Add your required notification text/variables to the notification template section:
If you need more space, you can drag this field further down using the handlebar in the bottom right corner:
Step 7 Save the shape.
The tracked data page is used to view summary information for tracked data:
Data for the tracked data page is updated every 10 minutes (i.e. every 10 minutes the database is checked for new tracked items and these are pushed to the dashboard UI).
The tracked data page shows a maximum of 15 entries for any given tracked entity. So if you're tracking the same entity in more than 15 places, only the latest 15 entries are shown.
Tracked data summaries remain available for 15 days, starting from when an entity was last tracked. For example, if you're tracking customer IDs and ID 0100001 was tracked 14 days ago but then again today, it will be visible for another 15 days (and so on until it's not seen for 15 days).
Times shown on tracked data summaries are UTC.
To access the tracked data page, select process flows | tracked data from the left-hand navigation menu:
To find tracked data information for a particular field value, start by selecting the associated entity type - for example:
Next, enter the value that you want to review - for example, if you are tracking customer IDs, you'd enter the required customer ID here:
As you type a value, the search updates instantly so you may notice that the list of available summaries changes as you type.
From here, you can click any summary to view details:
Each tracked data summary shows tracking information for the given entity in a flow run - for example:
You need to map an array within a payload but also include one of the 'parent' fields - for example, mapping each entry from a nested items array while repeating the parent order number against each entry.
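The original payloads aren't reproduced here; a hypothetical sketch of the intent (field names illustrative) is:

```json
{
  "source": {
    "order_number": "ORD-1001",
    "items": [
      { "sku": "ABC-1", "qty": 2 },
      { "sku": "ABC-2", "qty": 1 }
    ]
  },
  "target": [
    { "order_number": "ORD-1001", "sku": "ABC-1", "qty": 2 },
    { "order_number": "ORD-1001", "sku": "ABC-2", "qty": 1 }
  ]
}
```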
To achieve this, the target field must be defined with double *. characters in the data path. For example:
If you don't do this, and just enter the standard single *. characters - for example:
...the output will be a flat object - for example:
The set variables shape is used to set values for flow variables and/or metadata variables at any point in a flow.
When defining variable values you can use:
static text (e.g. blue-003)
payload variables (e.g. [[payload.productColour]])
flow variables (e.g. {{flow.variables.productColour}})
Step 1 In your process flow, add the set variables shape in the usual way:
Step 2 Access shape settings:
Step 3 Click the add new variable option associated with the type of variable that you want to define - for example:
Step 4 Options are displayed for you to define the required variables and their values. How these are displayed depends on the type of variable you've chosen to add:
Step 5 Once variables are accepted they're added to the settings panel (you can edit/delete as needed):
Step 6 Save the shape.
The route shape is used for cases where a process flow needs to split into multiple paths, based on a given set of conditions. Conditions are defined based on any fields found in the schema associated with your source data, so the scope for using routes is huge.
To define multiple routes for your process flow you must:
Add a route shape.
Configure the route shape to add required routes and conditions.
Build the flow for each configured route by adding shapes in the usual way.
By default, multiple routes are processed in parallel when a process flow runs.
If a route shape is added without route conditions, incoming data flows down ALL defined routes.
When you add a route shape to a process flow, the shape is added to your canvas with two placeholder route stems - for example:
Follow the steps below to configure route data for a route shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload for the route shape is coming from - for example:
Step 2 Select a routing method to determine what should happen if a payload record matches conditions defined for more than one route:
These options are summarised below:
Step 3 Click the 'edit' icon associated with the first route:
Step 4 Enter your required name for this route - it's a good idea to ensure this provides an indication of the route's purpose. For example:
Step 5
If you intend to define multiple rules/filters for this route, select the required operator from the filter logic dropdown field:
Select AND so all defined filters must apply for a match.
Select OR so any one of the defined filters will result in a match.
If you only need to define one filter, just leave the default setting in place (it has no effect).
Step 6 Click the add new filter button:
Step 7 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Step 8 Use remaining operator, type and value options to define the required filter.
When a string filter type is selected, you have the option to select the regex operator and then define a regex value. This provides greater flexibility if you can't achieve the desired results using standard operators. preg_match is used for pattern matching.
Step 9 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 10 Click the save button (at the bottom of the panel) to confirm these settings. The new rule is added for your first route - for example:
Step 11 Repeat steps 6 to 10 to add any additional rules for this route. When you've added all required rules, click the save/update button at the bottom of the panel.
Step 12 Repeat steps 3 to 11 to configure the second route.
Step 13 Add any additional routes required using the add route button. Each time you add a new route, the canvas updates with an additional route stem from your route shape.
Step 14 Save your changes.
Having defined all required routes and associated conditions, the route shape on the canvas will have the corresponding number of route stems, ready for you to add shapes. For example:
Click the + sign for each branch in turn, then add the required shapes for each route flow.
The run process flow shape is used to call one process flow from another, so you can run process flows in a chain. For example, you might have a process flow that receives data from a webhook, applies filters and then hits a run process flow shape to call another flow with that data.
The default behaviour is for the payload from the end of the calling process flow to be sent into the called process flow for onward processing. However, when configuring a run process flow shape you can add a manual payload - in this case, your manual payload will be sent into the called process flow.
The run process flow shape also allows you to choose whether any variables associated with the called process flow should be applied.
A called process flow will only run if it is enabled.
A called process flow is always added to your for processing - even if the parent flow is triggered manually.
A called process flow does NOT inherit the priority of its parent - you should set the priority of these process flows individually.
If you don't configure a manual payload in the run process flow shape, the final payload from the calling process flow is always sent into the called process flow.
If multiple payloads are passed into the run process flow shape, the called process flow will run once for each payload - these runs take place in parallel.
When a payload is passed to a 'child' process flow, meta variables are included.
You cannot create a recursive process flow loop - for example, if Process Flow A calls Process Flow B, you cannot then call Process Flow A from Process Flow B.
Step 1 In your process flow, add the run process flow shape in the usual way:
Step 2 Click in the flow field and select the process flow that you want to run:
If you have a lot of process flows, you can search for the one you want to use here.
Step 3 Move down to the settings section and choose which version of the selected process flow to call:
Available options are summarised below:
Step 4 If your selected process flow is associated with any process variables, these are shown - you can choose to enable or disable these:
Step 5 If you want to pass a manual payload into this process flow, toggle the specify payload manually option ON and paste the required payload into the supplied payload field:
The manual payload can be any format - JSON, XML, plain text, etc.
Step 6 Save the shape. The configured shape is added to the canvas with the sub-flow available as a link - for example:
Click this link to open the sub-flow in a new browser tab.
Trigger schedule options are used to schedule the associated process flow to run at a specified frequency and/or time. Here, you can use intuitive selection options to define your requirements or - if you are familiar with cron expressions - use advanced options to build your own expression.
Trigger schedules are based on .
Schedules can be defined based on the following occurrences:
Define a schedule to run every x minutes - for example:
Define a schedule to run every x hours - for example:
Define a schedule to run on selected days of the week at a given start time - for example:
Use the every dropdown list for quick daily presets, or define custom settings:
Define a schedule to run on selected days of the month or weeks, for selected months, at a given start time - for example:
Use the every dropdown list for quick monthly presets, or define custom settings:
If you are familiar with cron expressions, you can select this option to activate the cron expression field beneath and enter your required expression directly:
Patchworks supports the 'standard' cron expression format, consisting of five fields: minute, hour, day of the month, month, and day of the week.
Each field is represented by a number or an asterisk (*) to indicate any value. For example:
0 5 * * *
would run at 5 am, every day of the week.
Extended cron expressions (where six characters can be used to include seconds or seven characters to include seconds and year) are not supported.
Follow the steps below to add a new trigger schedule.
Step 1 To add a new schedule, click the add new schedule button:
Step 2 Select the required occurrence type.
Step 3 Define your required settings for the occurrence.
Step 4 Click save to save this schedule. The schedule is added to the shape - for example:
You can add a single schedule, or multiple schedules. When you add multiple schedules, ALL of them will be active.
Convert values using a .
Apply a field-level script. Note that a script will time out if it runs for more than 120 seconds.
Field | Summary |
---|
Define filter conditions that must be met for an entity's progress to be reported as a success
or failure
in for this tracked item.
Add a message to be displayed in for this tracked item.
In this context, tracked data marked as a 'failure' simply means that one or more filters defined for success were not matched and therefore this item is reported as a failure in associated .
These filters work in the same way as in the dashboard - select/define a field, then set conditions and values.
The success or failure outcome from these filters is reported in the logs, and also in - for example:
You can define a message to be displayed in the for associated tracked data:
This message can be text-only, or any combination of text, , , and . For example:
In , this example is shown as:
Messages are added to when:
Currently, , , and trigger types are available.
You can incorporate , and variables in notify messages.
Status | Display colour |
---|
Channel | Outcome |
---|
All defined notification groups are available for selection. If you need to add a new group or check recipients in an existing group, please check our page.
Remember that a notification message can include static text and (using ) dynamic content.
Here, you can to access available tracked data summaries, and then choose to view .
Tracked data summaries are available for data tracked via the , and for data tracked via ).
The entity type list includes all entity types that have been tracked (via the or via ).
are displayed for the most recent process flow runs where this entity was tracked - for example:
You'll see one entry for each occasion that this entity was tracked in this run. In our example we have two entries because the associated process flow includes two , and this entity passed through both.
Summary information varies slightly, depending on whether the data was tracked via a , or via :
Information | Track data shape | Tracked fields |
---|
To configure these routes (and add more if needed) click the 'cog' icon associated with this shape to access .
Option | Summary |
---|
Alternatively, you can toggle the manual input option to ON and add syntax for :
The manual data path field supports .
Presentation of the value field is dependent upon your selected type. When defining a value, you can include , , and variables.
If you select a process flow that is not enabled, an error will be displayed when you attempt to save these settings. In this case, you should access the process flow you want to call and it, then come back to save this shape.
Option | Summary |
---|
Step 2 Select an .
From here you can select any - for example:
Success | Green |
Info | Grey |
Warning | Orange |
Error | Red |
Email +Log | The defined notification message is sent to any specified email notification groups AND output to run logs. |
Email | The defined notification message is sent to any specified email notification groups. It is NOT output to run logs. |
Log | The defined notification message is output to run logs. Email notification groups are not available for selection (so no emails are sent). |
Follow all matching routes | If a record matches defined conditions for multiple routes, send it for onward processing down all matched routes. |
Follow first matching route only | If a record matches defined conditions for multiple routes, send it for onward processing down the first matched route, but no more. |
Source instance Source endpoint |
Entity |
Direction |
|
Field paths | Define one or more data fields to be tracked - i.e. fields that you may want to look up in the event of a query. |
Receive / send | The flow direction for the associated tracked data, as defined in the track data shape. | The flow direction for the associated tracked data, as defined for the associated connector endpoint. |
Date & time | The date & time that this data was tracked. | The date & time that this data was tracked. |
Success / fail | Defaults to success unless success criteria filters are defined, in which case the outcome of those filters is shown. |
Message | N/A |
Latest deployed version |
Latest draft version |
Specific version | Select a version from the dropdown list. With this approach, keep in mind that this version is always called - so if you update this process flow subsequently, you will NOT be running the latest version. |
Using the try/catch shape, you can build your own path to handle process flow sync exceptions elegantly.
Place a try/catch shape before key steps in your flow, then configure its settings to determine behaviour when exceptions are found. Once this is done, the shape is added to the canvas with two routes - one for try and one for catch:
For the try route, build your flow in the usual way to achieve the required result. For the catch route, define a flow that should be followed for exceptions. For example, you might add exceptions to a cache (so they can be processed subsequently) and then notify specified contacts that exceptions have occurred.
The notify shape can be very powerful when used with the try/catch shape. Keep in mind that you can include meta, flow, and payload variables to define notification messages. For example:
When the process flow runs, data flows down the try route and ideally completes without exceptions. However, if an exception is found, the associated payload is removed and sent along the catch route.
You can add one try/catch shape per process flow
If a connector needs to retry authentication, the retry is NOT caught (i.e. it's not sent into the catch route). If re-authentication is successful the flow continues as normal (i.e. along the try route), otherwise the process flow fails.
As noted above, if an exception is found it gets removed (as a failed payload) and sent along the defined catch route. For example, if a try/catch shape receives 20 payloads and finds 4 exceptions, then 4 failed payloads are sent along the catch route.
Failed payloads can be found on the failed payloads tab in run logs - for example:
To add and configure a new try/catch shape, follow the steps below.
Step 1 In your process flow, add the try/catch shape in the usual way:
You can add one try/catch shape per process flow. It's up to you where you place this in your flow, but it's generally a good idea to add it at the very start to ensure that all steps are checked.
Step 2 Access settings for the newly placed shape:
Step 3 Choose the action to take if an exception is encountered:
Available options are summarised below:
Step 4 Save settings to return to the canvas and build your try and catch routes as required.
The trigger webhook option can be used if you want to trigger a process flow whenever a given event occurs in your third-party application.
When you choose to add a webhook to a process flow trigger shape, a Patchworks webhook URL with built-in authentication is auto-generated. This URL must be added to your third-party application, so it knows where to send event data.
How you use webhooks is driven by your business requirements, and the capabilities of your third-party application. For example, your third-party application might send a webhook which includes a batch of orders to be processed in the body, or the webhook body might simply contain a notification message indicating that orders are ready for you to pull.
Patchworks webhook URLs are generated in the form:
For example:
The {{webhook_id}} is a Patchworks signature which is generated as a random hash (that doesn't expire). This provides built-in authentication for our URLs; however, they should still be kept private.
The default response for a successful webhook trigger is a status code of 200, with the following body:
If required, you can customise this response.
Follow the steps below to add a new webhook trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new webhook button:
...a unique Patchworks webhook URL is generated:
Step 3 Copy this URL and paste it into your third-party application.
The documentation for your third-party application should guide you through any required setup for webhooks.
Step 4 If you want to customise the response for your webhook, click the edit icon associated with the URL and make required changes. For more information please see Customising your webhook response.
Step 5 Build the rest of your process flow as needed to handle incoming data from your defined webhook(s).
Step 6 Make sure that your process flow is deployed and enabled - webhooks will not be received if this isn't done.
If required, you can change the default response for your webhook by selecting the 'edit' icon associated with the URL - for example:
Here, you'll find options to select an alternative status code and specify new body text:
Here you can:
Use the status code dropdown field to select the required response code.
Enter the required text in the body field.
Select the required format for your body content (choose from JSON, XML or Plain text).
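For example, you might choose to return a JSON body along these lines (the content here is entirely illustrative - the response body is whatever you define):

```json
{
  "status": "received",
  "message": "Payload accepted for processing"
}
```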
Event connectors can trigger process flows by listening for events that are published to message queues/topics by a message broker (e.g. RabbitMQ).
Once an event connector is configured, it becomes available for use as a process flow trigger.
New event connectors are available for selection in process flow trigger shapes as soon as they are saved successfully.
ALL messages published to selected queues/topics are passed through to the process flow.
Follow the steps below to add an event connector as a process flow trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new event listener button:
...event options are displayed:
Step 3 Select a broker from the list (all configured event connectors are available for selection):
Step 4 Select a queue from the list - all configured message queues/topics for the selected broker (i.e. event connector) are available for selection:
Step 5 Save the shape settings.
If data is coming into the process flow via a , use these dropdown fields to select appropriate source connector details (i.e. the same instance and endpoint as configured for the previous connector shape). If data is coming into the flow via a non-connector source (such as a , , or ) then leave these fields blank.
If data is coming into the flow via a , this field will be set as required by default. Otherwise, select the entity type associated with the data field(s) that you want to track.
Note that the selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in .
If data is received via a , this field will be set as required by default. Otherwise, select the flow direction (send or receive) associated with the data field(s) that you want to track:
Note that the selection made here has no impact on how the shape performs - it simply determines how the tracked field is categorised in .
If multiple fields are specified, these values are tracked as one, concatenated value. To track multiple fields separately, use one shape per field.
If data is received via a , you can navigate the associated data structure to select a field for tracking - for example: If data is received via a non-connector source (such as a , , or ), enter a path to the required field manually - for example:
If are defined in the track data shape, the success/failure marker is determined by the outcome of these.
If success criteria filters are NOT defined in the track data shape, the default marker is success.
If a is defined in the track data shape, it is shown here.
Always run the latest deployed version of this process flow. So, if the called process flow is edited and re-deployed at any point, the latest deployed version will always be called.
Always run the latest draft version of this process flow. So, if the called process flow is edited at any point, the latest edited (draft) version will always be called.
Action | Flow behaviour |
---|---|
Succeed as partial success
The flow completes and, where possible, data is synced.
Failed payloads for exceptions are removed and are available from run logs.
The flow is logged with a status of partial success.
Fail flow
The flow completes and, where possible, data is synced.
Failed payloads for exceptions are removed and are available from run logs.
The flow is logged with a status of failure.
In this case, the flow is marked as a failure (and will show as such in run logs) but it's important to note that this does not cause the process flow to stop - any valid payloads continue through the flow.
It's likely that you would only use this option instead of succeed as partial success if logging any failed payloads as a general flow failure is important from a reporting/metrics perspective.
Fail flow & retry
A retry will only happen if the process flow is enabled and deployed, and is NOT triggered manually.
The flow completes and, where possible, data is synced.
Failed payloads for exceptions are removed and are available from run logs.
The flow is logged with a status of retried.
The flow is retried with ALL data.
The flow is retried ONCE only.
Use this option with care. When the process flow is retried, all data is processed again. If you have any doubt as to whether duplicate records will be handled correctly, we advise using a different action and managing exceptions separately (for example, add exceptions to a cache and process from there).