The assert payload shape is typically used for testing purposes - you can define a static payload which is used to validate that the current payload (i.e. the payload generated up to the point that the assert payload shape is encountered) is as expected.
To view/update the settings for an existing assert payload shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure an assert payload shape, all you need to do is paste the required payload and save the shape - for example:
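For instance, a minimal assert payload might look like the sketch below (the field names and values are illustrative - yours will reflect your own data):

```json
{
  "customer": {
    "id": 1000000001,
    "email": "jane.smith@example.com"
  }
}
```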
An assert payload shape can only be saved if a payload is present. If you add an assert payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
The add to cache shape is used to cache (i.e. store a copy of) the payload as it stands at that point in the process flow.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The maximum cache size is 50MB.
Cache names must not include full stop (.) or colon (:) characters.
Cached data is stored in Amazon S3.
To add an add to cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to cache the payload - typically this would be after a 'GET' connection shape, or perhaps after data has been mapped or manipulated via a script.
Step 2 Select the add to cache shape from the shapes palette:
Step 3 Click the create cache option:
...cache options are displayed:
Step 4 Click in the select cache level field to choose when/where this cache will be available:
Choose from the following options:
Step 5 Enter a name for this cache:
The cache name must not include full stop (.) or colon (:) characters.
Step 6 If you have chosen a flow-level or company-level cache, you can set a data retention period to determine when this data will expire - for example:
The data retention period for a flow run-level cache is always 2 hours - this cannot be changed. The maximum retention period for a flow-level or company-level cache is 7 days.
Step 7 Save changes to exit back to add to cache settings where you can continue with your newly created cache.
Step 8 Click in the select a cache field and select your new cache from the list:
Step 9 Enter a cache key to identify this cache object - for example:
Your cache key can be static or dynamic (see the cache key table below).
A cache key cannot exceed 128 characters.
Step 11 Set the append option as required. If this option is toggled ON, incoming data is appended to the existing cache key each time an update is made. If this option is toggled OFF, the cache key is overwritten with new data each time.
Step 12 Save changes. The add to cache shape is added to your process flow, displaying the given name and key - for example:
You can add as many add to cache shapes as you like in a process flow. For example, you might want to cache a payload as soon as it gets pulled from a source connection, and again later after it's been transformed - for example:
How long a cached payload remains available depends on the cache level selected when you configured the add to cache shape.
When a process flow hits an add to cache shape, all data from the incoming payload is cached. With this in mind, ensure that your incoming data is filtered and/or batched as required.
The default behaviour is for the existing cache to be overwritten each time it is updated. Please see the section on appending data (below) for more information.
Cache level | Summary
---|---
Flow run | The data associated with this cache is only available while the process flow is running. For enabled & deployed process flows, the cache is removed when the run completes, so there's no chance that data from one run is available to another. For draft/inactive process flows, a TTL (Time to Live) with a default of 2 hours determines when the cached data is deleted.
Flow | Data in the cache is retained after the process flow is run, so it can be loaded again within this process flow if required.
Company | The data associated with the latest update to this cache is available for use in this process flow and in any other process flows created within your company profile.
Cache key | Summary | Example
---|---|---
Static | Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). | orders
Dynamic | The cache key resolves dynamically using variables. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information please see our Generating dynamic cache keys with payload variables section. | order-[[payload.0.id]] OR order-[[payload.*.id]]
If you are adding a company-level cache, you may want to make a note of the key that you specify here, so it can be shared with other users in your organisation who may want to load data from this cache.
Step 10 If the incoming payload is paginated, consider how pages should be handled when cached. When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON, the payload for each page is saved to its own cache key (with key names generated dynamically from your specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key.
It's important to understand how the save all pages option works in conjunction with the append option. If you aren't sure, please see our Cache pagination options page before proceeding.
Yes. As with any other process flow shape, you can view the associated payload for an add to cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
If you place an add to cache shape before a shape which generates multiple payloads (typically, a flow control shape), you can see each payload that is created via the payload dropdown - for example:
Cached data can be loaded via our load from cache shape. Please refer to the Load from cache shape section for more information.
When an add to cache shape is dropped into a process flow, the entire incoming payload is cached and associated with the given cache key. Depending on the cache type, you can load this cache later in the same flow or in a different flow.
In the simplest scenario, your given cache key would be a static value (e.g. customers) and you would use this to load the entire cache (containing perhaps tens, hundreds, even thousands of items) where required. But what if you want to load a specific item from a cache, rather than the whole thing?
This is where dynamic cache keys are so useful.
To load data from a cache, you configure a load from cache shape with the required cache and a single cache key. All data associated with your given cache key is loaded.
Consider the example incoming payload below, where four records are cached with a static cache key with a value of customers:
If we were to configure a load from cache shape to access the customers cache key, all four records would be loaded.
So, in order to load specific items from a cache, the incoming data must be added to a cache in such a way that we can easily target individual items. We need an efficient way to take incoming data, batch it into single-record payloads and add each of these to the cache with its own unique, identifying cache key - i.e.:
We can achieve this as follows:
Flow control is an easy way to batch incoming data into single-record payloads, however you may prefer an alternative approach. The important point is that the add to cache shape must receive single-record payloads - how you achieve this is up to you.
When you specify a dynamic variable as the cache key, the value for that variable is injected into the key. To prevent large amounts of data being passed into the key, a 128-character limit applies.
Follow the steps below to configure an add to cache shape with a payload variable for generating dynamic cache keys.
These steps assume that you have already defined a flow control shape (or some other means) to ensure that the add to cache shape receives single-record payloads.
Step 1 Drop an add to cache shape into your process flow, where required.
Step 2 In the add to cache shape settings, choose to create cache:
Step 3 Set the cache level and name as required and save changes.
For more information on these fields please see the add to cache shape page.
Step 4 Select the cache that you just created - for example:
Step 5 Move down to the cache key field and enter the required key. Here, you use standard payload variable syntax to define your target data element:
...where schema notation should be replaced with the notation path to the first occurrence of the required element in the payload which should be used to form the cache key. If required, you can also include a static prefix or suffix. For example:
The output of the payload variable will be used as the cache key.
Our example uses dynamic payload variables; however, you can also use metadata variables and/or flow variables. For more information please see the dynamic variables section.
Step 6 Save the add to cache shape settings.
Cached data can be loaded via our load from cache shape. Please refer to the Load from cache shape section for more information.
However, it is possible to append data to a cache, so each time the process flow runs and the add to cache shape is reached, the incoming payload is appended to the existing cache. This works for any cache type (flow, flow run, and company).
Cache size. Theoretically, if a cache is set to append data and then runs on a regular basis indefinitely, it could grow to an unmanageable size. With this in mind, a limit is in place to ensure that a single cache cannot exceed 50MB.
Append data format. Appending cached data is supported for JSON only.
To use the append option, follow the steps below.
Step 3 Enable the append option:
Step 4 The path to append to field is displayed:
Here, you need to consider the structure of the payload that you're passing in and specify a path that ensures that each new payload is appended in the right place.
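For example - assuming each incoming payload wraps its records in a top-level orders array (an illustrative structure) - setting the path to append to as orders would merge each new payload's records into that existing array:

```json
{
  "orders": [
    { "id": 101 },
    { "id": 102 }
  ]
}
```

Here, a subsequent payload's orders would be appended inside the existing orders array, rather than at the payload root.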
Step 5 Save the shape. Next time the process flow runs, the data will be cached and appended.
If you choose to view the payload for an add to cache shape, the payload will always show data from the latest run - for example:
However, when you add a load from cache shape, the payload will show ALL appended data so far - for example:
When a process flow runs, the payload for received data flows through to subsequent steps. In a straightforward scenario we pull data from one connection, then perhaps apply filters and/or scripts before mapping/transforming data fields and finally pushing the payload into a target connection. This is a very linear example - we start with a payload and it flows all the way through to completion.
For more information please see:
Understanding how pagination options impact what data is cached.
Together, these two options determine how paginated data is cached, so it's important to understand the implications of each.
If you are caching paginated data and toggle the save all pages option to ON, the payload for each page is saved to its own cache key.
The name of this key is generated dynamically, by adding the page number as a suffix to your given cache key for the add to cache shape. Consider the example below:
In this example, our given cache key is called demokey and the save all pages option is toggled ON. So, if received data is paginated into 5 pages, there will be 5 payloads to be cached. These would be named:
demokey.1
demokey.2
demokey.3
demokey.4
demokey.5
Here, .n is the page number suffix. If/how these cache keys can be accessed depends on how the append option is set.
It's important to note that every time a connection shape pulls paginated data, page numbers reset to 1.
When the append option is toggled ON, payloads are appended to the given cache key. How this works depends on the save all pages option:
If the save all pages option is toggled OFF and the append option is toggled OFF, the given (static) cache key is overwritten every time the payload for a page is added to the cache. In this scenario, the cache key will always include data for the latest page from the latest pull.
If the save all pages option is toggled ON and the append option is toggled ON, dynamic cache keys will be created on the first pull, and all subsequent pulls will append paginated payloads to the correlating key (additional cache keys are created for new page numbers, as needed). In this scenario, dynamic cache keys will continue to grow with each data pull as each one will include all payloads received for the correlating page number.
If the save all pages option is toggled OFF and the append option is toggled ON, the payload for each page is appended to the given (static) cache key. In this scenario, your single (static) cache key continues to grow with each data pull - nothing is overwritten.
If the save all pages option is toggled ON and the append option is toggled OFF, dynamic cache keys will be created on the first pull, and all subsequent pulls will overwrite data in the correlating key (additional cache keys are created for new page numbers, as needed). In this scenario, dynamic cache keys will only ever contain the latest data for the correlating page number.
The diagram below illustrates the above:
It's not currently possible to access different versions of a cache. So, each time a process flow runs with the same add to cache shape, the payload for that cache is overwritten with the latest data, and it's this that will be available to load from a company cache.
Cache retention

When you choose to add a company cache, retention options are available so you can decide how long cached data should be retained (you can set a time limit in seconds, minutes, hours, or days). The default setting is 2 hours. This can be updated to a maximum of 7 days.
We've already noted how the add to cache shape can be added to a process flow to cache the entire payload at a given point in the flow. The default behaviour is that when a process flow runs and hits an add to cache shape, any existing data associated with that cache is overwritten with a new payload from the new run.
Paginated data. If your connection shape is pulling paginated data, it's very important to understand how the save all pages option works in conjunction with append. For more information please see our Cache pagination options page.
Step 1 Create your cache, then select it and add your cache key.
Step 2 Ensure that the save all pages option is set as needed. For more information about how this option affects appended data please see our Cache pagination options page.
However, more complex scenarios might need to use a payload that was generated several steps previously, or even from a different process flow. This is where the add to cache and load from cache shapes come in.
Wherever you place an add to cache shape in a process flow, it will cache (i.e. store a copy of) the payload as it stands at that point in the process flow. You can then use a load from cache shape to reference this payload elsewhere in the same process flow and/or in other process flows for your organisation (depending on how the add to cache shape is configured).
When you drop a connection shape into a process flow, there are two options that you should consider if your selected endpoint paginates the data that is pulled - these are: save all pages and append.
When paginated data is pulled from a connection shape, a payload is created for each page - you can see these in the run log:
For information about setting the append option, please see the appending data section.
cache: customerData cache key: customer-1000000001
cache: customerData cache key: customer-1000000002
cache: customerData cache key: customer-1000000003
cache: customerData cache key: customer-1000000004
Action | Outcome
---|---
Place a flow control shape immediately before the add to cache shape and configure it to create batches of 1 at the appropriate level for your data. | The incoming payload is batched into multiple payloads - one payload per data element (e.g. one order per payload, one customer per payload, one product per payload, etc.).
Configure the add to cache shape and specify a payload variable as the cache key, where the variable looks for the first occurrence of a uniquely identifying element in the payload (typically an id or reference number). | The add to cache shape receives and caches single-record payloads from the flow control shape. The cache key for each payload is generated dynamically by resolving the payload variable from each incoming payload.
Loading data from a cache is very straightforward using the load from cache shape, however you do need to consider what data you want to load. You can:
Each of these options requires a slightly different approach, as summarised in the diagram below and explained in subsequent sections:
The load from cache shape is used to retrieve a stored payload from an existing cache key (created from an add to cache shape).
You might configure a load from cache shape in the same process flow as the original add to cache step or - if a cache was added and set to company level - you might choose to load it in a different process flow.
To add a load from cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to load the payload from a cache - this could be at the very start of a process flow, or perhaps somewhere further down.
Step 2 Select the load from cache shape from the shapes palette:
Step 3 Click in the select cache field and choose which cache you want to retrieve:
In this list, you'll find any caches that have been added to this process flow (via the add to cache shape), together with any caches that have been added to other process flows and set to a cache level of company.
Step 4 Enter the cache key that you want to retrieve - for example:
Your given cache key might be static or dynamic, depending on how the cache was configured in the corresponding add to cache shape:
For detailed information about each of these approaches, please see What cached data do you want to load?
The cache key must be associated with an existing add to cache shape, either in the same process flow or (in the case of company-level caches) in another process flow.
Step 5 If you want this process flow to fail if for any reason this cache can't be retrieved, tick the fail on cache miss option:
If you leave this option un-ticked, the process flow will continue to run if the cache can't be loaded.
Step 6 If the cache that you're loading was created with the save all pages option toggled ON, you should toggle the load all pages option ON when loading this data:
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key. For more information please see our Cache pagination options.
Step 7 Save changes. The load from cache shape is added to your process flow, displaying the given name and key - for example:
Yes. As with any other process flow shape, you can view the associated payload for a load from cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
This approach is the simplest - all incoming data is cached with a static cache key.
In the example below, all incoming customer records will be added to a cache named ALLcustomers and a static cache key named customers:
When the data is cached, it's likely that the cache will include multiple records - for example:
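For example, the customers cache key might hold something like this (illustrative records):

```json
[
  { "id": 1000000001, "last_name": "Smith" },
  { "id": 1000000002, "last_name": "Jones" },
  { "id": 1000000003, "last_name": "Brown" }
]
```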
To retrieve this cache, we simply drop a load from cache shape where required in the process flow and specify the same cache and cache key that were defined in the corresponding add to cache shape:
The load from cache shape works as normal to retrieve cached data where the cache was created with a payload variable - you choose the cache name and key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated from the payload variable that was specified when the cache was created.
If a payload variable has been used to cache data, you would typically have included a flow control shape to create multiple payloads - for example:
So you will have multiple cache keys that can be loaded. To do this, you can add one load from cache shape for every cache key that you want to retrieve, specifying the required key in each case. For example:
Alternatively, you can add a single load from cache shape and target specific cache keys by passing in the required ids.
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
When we come to load this data, we must target the required cache keys. In the same way that we use a payload variable to add data to a cache with dynamic cache keys, we can use a payload variable to load data from these keys.
To do this, you configure a load from cache shape with a 'multi-pick' payload variable in the cache key field, and ensure that data passed into this shape contains the values required to resolve this variable.
...where <element> should be replaced with whichever data element you will be passing in to resolve the cache key. For example:
The <element> defined here will be the same data element that was specified in the payload variable for the corresponding add to cache shape.
To help understand how this approach works, we will step through an example.
Suppose we have the scenario where a process flow has been built to receive incoming orders, and another process flow needs to target specific orders received from this flow.
Here, we will batch an 'orders' payload into single order payloads - then we'll add each payload to its own cache key, which is created dynamically from a payload variable. Let's break these steps down:
Here, we will pass the required order ids into a load from cache shape. These ids are then used to resolve dynamic cache keys (via a payload variable) to determine which orders should be loaded. Let's break these steps down:
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
In summary, you can drop a single load from cache shape into a process flow and specify a payload variable as the required cache key. This must be in the form:
You then need to pass in any <element> values that should be used to resolve the required cache key names. This might be achieved via a connection shape (if values are being generated from another system), or perhaps a manual payload shape. Whichever shape you use must be placed immediately before the load from cache shape.
To allow the second process flow access to orders processed by the first, we must add all incoming orders to a company type cache in the first process flow (remember that company type caches can be accessed by any other process flow created for your company profile). To ensure that we can go on to target specific orders from this cache later, we will cache every order in its own cache key, using a payload variable.
To retrieve specific orders from the cache created in the first process flow, we will pass the required order ids into a load from cache shape. These ids will be used to resolve dynamic cache keys, using a payload variable.
Cache key | Summary | Example
---|---|---
Static | Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). | orders
Dynamic | The cache key resolves dynamically based on a payload variable. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information please see our Generating dynamic cache keys with payload variables section. | order-[[payload.0.id]] OR order-[[payload.*.id]]
Step 1: Manual payload
Step 2: Filter
Step 3: Flow control
Step 4: Add to cache
Step 5: Run flow
Step 1: Manual payload
Step 2: Load from cache
Step 3: Run flow
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
When we come to load this data, we must target the required cache key. If you only want a single item, the quickest way is to specify the resolved cache key.
The load from cache shape works as normal - you choose the cache and cache key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated dynamically by resolving the payload variable that was specified when the cache was added.
Consider the following process flow:
Here, our manual payload contains customer data as below:
To allow us to target specific customer records from this payload, we send it through a flow control shape, which is set to create one payload per customer:
...so now we have lots of payloads to be cached:
If we look at the payload for the first of these, we can see it contains a single customer record - notice that there's an id field with a value of 1000000001. This field uniquely identifies each record.
Next we define an add to cache shape - we create a new cache and use a payload variable to generate a dynamic cache key for each incoming payload:
Here, the payload variable is defined as customer-[[payload.0.id]], where:
- customer- is static text to prefix the resolved variable.
- [[payload.]] instructs the shape that this variable should be resolved from the incoming payload.
- 0 denotes that the first occurrence of the following item found in the payload should be used to resolve this variable.
- id is the name of the field in the payload to be used to resolve this variable.
So, if we take our first payload above, our payload variable would resolve to the cache key customer-1000000001.
This is what we use in our load from cache shape:
Step 1 In your process flow, access settings for the map shape that you want to update:
Step 2 Click the add mapping rule option - for example:
Step 3 Click the add transform button:
Step 4 Click the add transform button:
Step 5 Click in the name field to access a list of all available transform functions, then select cache lookup:
Step 6 Cache reference fields are displayed:
Complete these fields using the table below as a guide:
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way. Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the specified cache values will be mapped to the target field.
The steps detailed above show how to configure the cache lookup transform with a known cache key. However, it's possible to populate the cache key automatically, using the output from a previous transform function.
When the key field is blank, output from the previous transform function for the mapping is applied.
Suppose you have a cache where multiple cache keys have been defined in the form itemref-last_name - for example: 1000021-Smith.
Now suppose you want to define a cache lookup transformation which will determine the key by manipulating mapped fields. You would:
Add a mapping row with two source fields - one for itemref and another for last_name.
Select itemref as the target field.
When the process flow runs, output from the concatenate transform function will be applied as the key for the cache lookup transform function.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Follow the steps below to configure a connection shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
Step 2 Select the endpoint that you want to use - for example:
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
Step 7 Set error handling options as required. Available options are summarised below:
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Step 9 If your selected endpoint is configured to POST/PUT/PATCH/DELETE data, you can set response handling options:
These options are summarised below:
Step 10 Save your changes.
You can view and manage all existing caches from the data caches page - to access this page, select caches from the dashboard navigation menu.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The data caches page is split into three sections: flow run caches, flow caches, and company caches:
Each cache is listed with the following details:
If you have a lot of caches, you can search by name:
When you select a cache from the list, an edit cache page is displayed:
From here you can:
To change the name of the cache, simply update the name field in the upper cache details panel, then click the save button.
The cache name must not include full stop (.) or colon (:) characters.
You can use the maximum age slider to change the cache retention period for a cache:
Note that:
The maximum age for a flow run cache is 2 hours - this cannot be changed
The maximum age for a flow or company cache can be changed to anything up to 7 days
The usage panel shows general usage information about the cache:
Here you can see:
The cache contents panel displays an entry for each cache key update. Information shown varies, depending on the cache type.
The following details are displayed for each cache item in a flow run-level cache:
The following details are displayed for each cache item:
To clear all current content in the cache, click the clear cache button:
The manual payload shape contains an 'orders' payload with 17 orders in total.
The filter shape ensures that orders are only processed if the id field is not empty.
The flow control shape is set to create batches of 1 from the payload root level - so every order will be added to its own payload.
The add to cache shape is defined to add to a company type cache, named CPT-722. The cache key is created dynamically, where the first part is always order- followed by the value of the first id element found in the incoming payload - e.g. order-5697116045650. All data from the incoming payload will be added to this cache key. Taking our example using flow control, the incoming payload will only ever be a single order.
When this process flow runs, checking payload information for the add to cache shape shows that 17 payloads have been cached - one payload for each order.
The manual payload shape contains two order ids that we want to load from our cache.
The load from cache shape is configured to load data from our CPT-722 cache, targeting dynamic cache keys from order-[[payload.*.id]]. Here, the required cache key(s) will be resolved from all (*) ids found in the incoming payload - in this case order-5693105439058 and order-5697116045650.
When this process flow runs, checking payload information for the load from cache shape shows that two payloads have been loaded - one for each of our given ids.
If caches have been added for use in any process flow, you can reference these in field mapping transformations.
Using a cache lookup transform function, you can look up values from a cache and map them to fields in a target system.
If you've added/updated a field mapping before, you'll be used to selecting a source field and a target field. However, when referencing a cache we don't select a source field - the specified cache data is our source.
To do this, you add a mapping row in the usual way and define any required transform functions to produce the required value for cache keys. Once this is done, add a cache lookup transform function (as shown above) but leave the key field blank.
Add a transform function to join itemref and last_name fields with a hyphen.
Add a cache lookup transform function as defined above, but leave the key field blank.
The example above describes how you might use a transform function as the means to generate a cache key; however, the output from any transform function can be used.
The connection shape is used to define which connector should be used for sending or receiving data, and then which endpoint.
Any connector instances that have been added for your company profile are available to associate with a connection shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our connectors and instances page.
When you add a connection shape to a process flow, the options panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connection shape, click the associated 'cog' icon to access the options panel - for example:
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the marketplace.
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you have been advised to change it.
To access details for a particular cache, click on its name:
When the name is updated and saved, the change is immediately reflected in any shapes in process flows, where this cache is used.
This removes any existing data but leaves the cache in place so it can still be used in process flows.
Option | Summary
---|---
Retries | Sets the number of retries that will be attempted if a connection can't be made. You can define a value within the permitted range.
Backoff | If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff time between retries. You can define a value within the permitted range.
Allow unsuccessful statuses | If you want the process flow to continue even if the connection response is unsuccessful, toggle this option ON.

Option | Summary | Endpoint method
---|---|---
Save response as payload | Set this option to ON to save the response from the completed operation as a payload, which is then sent into the next step of the process flow. | POST, PUT, PATCH, DELETE
Expect an empty response | Set this option to ON if you are happy for the process flow to continue if no response is received from this request. This can be useful (for example) if your first connection step needs to POST a message to a system and then go on to another connection step to retrieve data - in this case we don't expect/need a payload from the first connection step for the flow to continue. | POST
Item | Summary
---|---
Size | The current size of the cache, shown with a percentage use indicator. The maximum cache size is 50MB.
Created | The date and time that the cache was created.
Last accessed | The date and time that the cache was last accessed. This timestamp is updated even when no data was added to the cache.
Keys | The number of keys associated with this cache.
Field | Summary
---|---
Cache | Use the dropdown list to select the cache that you want to reference. Available caches will be any caches added in this process flow, together with any caches added in other process flows and set to a cache level of company.
Key | Enter the key that was specified in the add to cache shape for the cache that you want to access. Alternatively, if this transformation is preceded by another transform function, you can leave this field blank and pick up a value from the output of the previous function - see the section on populating the cache key from a previous transform function.
Lookup | You can use dot notation to look up specific elements from the cached payload. If you leave this field blank, the full cached payload is retrieved.
Default | If required, specify a default value to be used if the cache lookup transform doesn't find a value to return.
Option | Summary
---|---
Raw | Push the payload exactly as it is pulled - no modifications are made.
First | This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' the record before it gets pushed.
Wrapped | This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects. The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes. Setting payload wrapping to wrapped will wrap the entire payload as an array object.
Item | Summary
---|---
Name | The name that was specified when the cache was created. Caches are created via the add to cache shape.
Flow | The process flow in which the cache was created.
Created | The date and time that the cache was created.
Last accessed | The date and time that the cache was last accessed by a process flow. The cache may or may not have been updated with data at this time (even if there is no data to be added, the access date/time is logged).
Keys | The number of cache keys associated with this cache. Cache keys are created via the add to cache shape.
Size | The current size of the cache in proportion to the limit.
Delete the cache (and all associated data). |
Item | Summary
---|---
Flow Run ID | The unique id of the process flow run that updated the cache key.
Started at | The date and time that the process flow run was started.
Key | The cache key name.
Page | If multiple pages are added to a cache (for example, if incoming data is paginated, or batched via flow control, and the save all pages option is toggled ON), each page is listed individually. Click the 'eye' icon to view the content associated with this key.
Unique key | The internal cache key.
Size | The size of the cache key.
Item | Summary
---|---
Key | The cache key name.
Page | If multiple pages are added to a cache (for example, if incoming data is paginated, or batched via flow control, and the save all pages option is toggled ON), each page is listed individually. Click the 'eye' icon to view the content associated with this key.
Unique key | The internal cache key.
Size | The size of the cache key.
Data pools are added and managed via the data pools option in general settings. From here you add a new data pool, or view/update an existing data pool. For more background information on data pools please see our De-dupe shape page.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
De-dupe data pools can be created in two ways:
You can access existing data pools from general settings.
Step 1 Select the settings option from the bottom of the dashboard navigation bar:
Step 2 Select data pools:
...all existing data pools are displayed:
For each data pool you can see the creation date, and the date that it was last updated by a process flow run.
Step 3 To view details for a specific data pool, click the associated name in the list:
...details for the data pool are displayed:
In the top panel you can change the data pool name/description (click the update button to confirm changes) or - if the data pool is not currently in use by a process flow - you can choose to delete it.
In the lower panel you can see all data in the pool. This data is listed with the most recent entries first - the following details are shown:
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is for SFTP connections; however, it also applies for FTP.
When you install the Patchworks SFTP connector from the Patchworks marketplace and then add an instance, you'll find that two authentication methods are available:
Further information on these authentication methods can be found on our SFTP (prebuilt connector) page.
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via the ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this. Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, with current filenames:
rename:store1/completed_orders/{{current_filename}}
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
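The original code sample isn't reproduced here, but based on the description below, a minimal sketch might look like this (assuming the script receives a $data array with payload, meta, and variables keys - the exact structure in your script environment may differ, and the filename is hypothetical):

```php
<?php
// Build a date-based folder name, e.g. "2023-07-28"
$timestamp = date('Y-m-d');
// Take the existing meta (if any) and add an original_filename that includes the date folder
$meta = $data['meta'] ?? [];
$meta['original_filename'] = $timestamp . '/export.json'; // 'export.json' is illustrative
// Put the meta back into the data object so the next SFTP step can use it
$data['meta'] = $meta;
```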
The path in your SFTP connection shape should be set to:
The data object in the script shape contains three items: payload, meta, and variables.
Our script code creates a timestamp, puts it into the meta, and then puts the meta into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass
endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
The de-dupe shape can be used to handle duplicate records found in incoming payloads. It can be used in three behaviour modes:
Filter. Filters out duplicated data so only new data continues through the flow.
Track. Tracks new data but does not check for duplicated data.
Filter & track. Filters out duplicated data and also tracks new data.
A process flow might include a single de-dupe shape set to one of these modes (e.g. filter & track), or multiple de-dupe shapes at different points in a flow, with different behaviours.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
JSON and XML payloads are supported.
As noted previously, the de-dupe shape can be used in three modes, which are summarised below.
You can have multiple de-dupe shapes (either in the same process flow or in different process flows) sharing the same data pool. Typically, you would create one data pool for each entity type that you are processing. For example, if you are syncing orders via an 'orders' endpoint and products via a 'products' endpoint, you'd create two data pools - one for orders and another for products.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The key field is the data field that should be used to match records. This would typically be some sort of id that uniquely identifies payload records - for example, an order id if you're processing orders, a customer id if you're processing customer data, etc.
When duplicate data is identified it is removed from the payload; however, exactly what gets removed depends on the configured key field.
If your given key field is a top-level field for a simple payload, the entire record will be removed. However, if the payload structure is more complex and the key field is within an array, then duplicates will be removed from that array but the parent record will remain.
Let's look at a couple of examples.
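For instance, with a simple payload and a key field of id, the duplicated record below would be filtered out (an illustrative payload):

```json
[
  { "id": "ORD-100", "status": "paid" },
  { "id": "ORD-100", "status": "paid" },
  { "id": "ORD-101", "status": "pending" }
]
```

Only the first ORD-100 record and the ORD-101 record would continue through the flow.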
The de-dupe shape supports JSON and XML payloads.
Generally, if your process flow is pulling from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
The de-dupe shape works with incoming payloads from a connection shape, and also from other payload-generating shapes (for example, a manual payload or load from cache shape).
The de-dupe shape is configured with a behaviour mode, a key field, and a data pool:
Data pools are created in general settings and are used to organise de-dupe data. Once a data pool has been created it becomes available for selection when configuring a de-dupe shape for a process flow.
When data passes through a de-dupe shape which is set for tracked behaviour, the value associated with the key field for each new record is logged in the data pool. So, the data pool will contain all unique key field values that have passed through the shape.
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a store1 folder:
The path is added as a regular expression, explained below:
Auth method | Summary
---|---
User pass | The instance is authenticated by providing a username and password for the SFTP server.
Key pass | The instance is authenticated by providing a private key (RSA .pem format) for the SFTP server.
Mode | Summary
---|---
Filter | Remove duplicate data from the incoming payload so only new data continues through the flow. New data is NOT tracked.
Track | Log each new key value received in the data pool.
Filter & track | Remove duplicate data from the incoming payload AND log each new key value received.
Column | Summary
---|---
Value | The key field value that was tracked into the data pool.
Created by | The name of the process flow where this entry was tracked into the data pool. Click this name to open the associated process flow.
Updated at | The date and time that the record was added to the pool (UTC time).
Field | Summary
---|---
FTP command | A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command - for example: rename:/orders/store1/processed/{{current_filename}}
Root | This field is only needed if you are specifying a regular expression in the subsequent path field. If you are NOT defining the path field as a regular expression, the root field isn't important - you can leave it set to /. If you ARE defining the path field as a regular expression, enter a root path that reflects the expected file location as closely as possible - this will optimise performance for expression matching. For example, suppose the files that we want to process are in the SFTP folder /orders/store/year/pending, and our specified path contains a regular expression to retrieve all files for store 1 for the current day in 2023. In this case our root would be defined as orders/store1/2023/pending. In this way, any regular expression match for the path will start in the relevant (2023) folder - rather than checking folders and subfolders for all stores and all years.
Path | If the name of the file that you want to target is static and known, enter the full path to it here - for example: store1/2023/pending/20230728.json. If the name is variable and therefore unknown, you can specify a regular expression as the path. In this case, enter the required regular expression here and ensure that the root field contains a path to the relevant folder (see above).
Original filename | This field is not currently used. For information about working with original filenames please see the Using an {{original_filename}} variable section below.
Original path | This field is not currently used. For information about working with original paths please see the Using an {{original_path}} variable section below.
You can use the manual payload shape to define a static payload to be used for onward processing. For example, you might define an email template that gets pushed into an email connection, or you might want to test a process flow for a connector that's currently being built by your development team.
The maximum number of characters for a single payload is 100k. Anything larger than this may cause the process flow to fail.
To view/update the settings for an existing manual payload shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure a manual payload shape, all you need to do is paste the required payload and save the shape - for example:
A manual payload shape can only be saved if a payload is present. If you add a manual payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
The flow control shape can be used for cases where you're pulling lots of records from a source connection, but your target connection needs to receive data in small batches. Two common use cases for this shape are:
A target system can only accept items one at a time
A target system has a maximum number of records that can be added/updated at one time
The flow control shape takes all received items, splits them into batches of your given number, and sends these batches into the target connection.
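For example, suppose the incoming payload contains three orders (an illustrative structure) and the flow control shape is set to a batch level of orders with a batch size of 1:

```json
{
  "orders": [
    { "id": 101, "total": 9.99 },
    { "id": 102, "total": 24.50 },
    { "id": 103, "total": 5.00 }
  ]
}
```

This would be split into three payloads, each containing a single order.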
Step 1 In your process flow, add the flow control shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates - for example:
Step 3 Move down to the batch level field and select the data element that you are putting into batches. For example:
The data structure in this dropdown field is pulled from the schema associated with the source.
Step 4 In the batch size field, enter the number of items to be included in each batch. For example:
Step 5 Save the shape. Now when you run this process flow, data will be split into batches of your given size.
If you check the payload for the flow control step after it has run, you'll see the payload for the last batch processed.
Mappings are at the heart of Patchworks.
When we pull data from one system and push it into another, it’s unlikely that the two will have a like-for-like data structure. By creating mappings between source and target data fields, the Patchworks engine knows how to transform incoming data as needed to update the target system.
The illustration below helps to visualise this:
In process flows, the map shape is used to define how data fields pulled from one connector correlate with data fields in another connector, and whether any data transformations are required to achieve this.
If your organisation has in-house development expertise and complex transformation requirements, you can use our custom scripts feature to code your own scripts for use with field mappings.
The map shape includes everything that you need to map data fields between two connections in a process flow. When you start to create mappings, there are two approaches to consider:
Having added a map shape to a process flow, click the associated 'cog' icon to access settings:
For more information about working with these settings, please see our Working with field mappings page.
The generate automatic mapping option is used to auto-generate mappings between your selected source and target connections.
All Patchworks prebuilt connectors (i.e. connectors installed from the Patchworks marketplace) adopt a standardised taxonomy for tagging common fields found in data schemas for a range of entity types (customers, orders, refunds, products, fulfillments, etc.). So, if your process flow includes connections to sync data between two prebuilt connectors, it's highly likely that auto-generating mappings will complete a lot of the work for you.
Once auto-generation is complete, mapping rows are added for all fields found in the source data - for example:
Where matching tags are found, the mapping rows will include both source and target fields (you can adjust these manually and/or apply transformations, as needed).
Any fields found in the source data which could not be matched by tag are displayed in partial mapping rows, ready for you to add a target manually.
For more information about using the generate automatic mapping feature, please see our Working with field mappings page.
If your process flow includes custom connections (i.e. connectors that have been built by your organisation, using the Patchworks connector builder), you can still use the generate automatic mapping option. The success of this will depend on whether field tagging was applied to your connector during the build:
If yes, your custom connector will behave like any of our prebuilt connectors when it comes to auto-generated mappings, adding fully mapped rows for all matched tags.
If no, Patchworks won't be able to match any source fields to a target automatically - partial mapping rows are added for all source fields found, ready for you to add a target manually.
It's very easy to add individual mapping rows manually, using the add mapping rule option:
We recommend that you always try the automatic mapping option first and then manually add any extra rows if needed. However, there's no reason that you couldn't add all of your mappings manually if preferred.
For more information about adding mappings manually, please see our Working with field mappings page.
When a connector is built, default filters can be applied at the API level, so when a process flow connection shape pulls data, the payload received has already been refined.
However, there may be times where you want to apply additional filters to a payload that's been pulled via a connection shape - for example, if the API for a connector does not support particular filters that you need.
The filter shape works with a source payload. As such, it should be placed AFTER a connection shape in process flows.
When specifying a filter value, the maximum number of characters is 1024.
To view/update the settings for an existing filter shape, click the associated 'cog' icon:
This opens the options panel - for example:
Follow the steps below to configure a filter shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload to be filtered originates.
Step 2 Click the add new filter button:
Step 3 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add a manual path.
The manual data path field supports metadata variables.
Step 4 Use the remaining operator, type and value options to define the required filter.
Presentation of the value field is dependent upon your selected type. For example, if the type field is set to specific date, you can pick a date for the value:
Step 5 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 6 Click the create button to confirm your settings.
Step 7 The filter is added to the filter shape - you can now add more filters if needed:
Don't forget that once a process flow has run, you can click the tick icon associated with any shape to view the payload at that point in the flow - this is a great way to check that your filter is refining data as expected.
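To make the keep matching? behaviour concrete, here's a minimal Python sketch (the field names and the filter itself are hypothetical):

```python
# Conceptual sketch of the keep matching? toggle - not actual platform code.
def apply_filter(records, matches, keep_matching):
    if keep_matching:
        return [r for r in records if matches(r)]   # matched records are kept
    return [r for r in records if not matches(r)]   # matched records are removed

records = [{"status": "paid"}, {"status": "pending"}]
is_paid = lambda r: r["status"] == "paid"
print(apply_filter(records, is_paid, keep_matching=True))   # [{'status': 'paid'}]
print(apply_filter(records, is_paid, keep_matching=False))  # [{'status': 'pending'}]
```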
When defining a filter, you can choose from the following types:
To add a new transformation for a field mapping, you start by adding a new transformation and then build the required functions. To do this, follow the steps below:
Step 1 Access the required process flow and then edit the map shape to be updated with a transform:
Step 2 Find the mapping that you want to update, then click the transform icon (between source and target elements). For our example, we're going to add a prefix to the 'id' field:
Step 3 Click the add transform button:
Step 4 Use the select a function field to choose the type of function that you need to use (functions are organised by type):
Step 5 Depending on the type of function you select, additional fields are displayed for you to complete. Update these as required - for our example, we're entering the text to be added as a prefix:
Step 6 Now we need to confirm which source field this transform should be applied to - click the add field button:
Step 7 Select the required field:
In straightforward scenarios, this will typically be the same source field as defined for the mapping row. However, more complex scenarios may prompt multiple options here - for example, if you apply multiple transforms to the same mapping.
Step 8 Accept your changes.
Step 9 Add more fields if necessary.
Step 10 When you're satisfied that all required fields have been added, accept changes and then save the shape settings.
To export the configuration for a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to export:
Step 2 Click the export map button:
Step 3 The configuration is exported and saved to your default downloads folder. The filename is always `map.json`.
To import a mapping configuration into a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to update:
You can import a mapping configuration into a new map shape, or into an existing one. If you import a configuration into an existing map shape, any existing mappings will be overwritten.
Step 2 Click the import map button:
Step 3 Navigate to the downloaded map configuration file on your local drive, then select it.
The default filename for exported map configuration files is `map.json`.
Step 4 Once you've selected a valid mapping configuration file to import, the import completes immediately.
The value of the field that was identified as a match for duplicate records. This is the field defined as the `key` to be used for de-dupe shapes - for example:
In this example, the de-dupe `key` is set to `id`, so the `value` field shown in the data pool will display `id` values.
When specifying a path to a given folder in this way, you don't need a `/` at the start or at the end.
Field transformations can be defined to change the value of a data field pulled from a source system before it is sent to its target. A transformation is comprised of one or more transform functions.
This page explains how to add a transformation for a field mapping and build the required transform functions.
For a summary of available transform functions, please see the available transform functions section.
For information about adding a field transformation using a cross-reference lookup table, please see our cross-reference lookups section.
If you're building multiple process flows with similar requirements for field mappings, you can export the configuration for a map shape, and then import that configuration into another map shape.
When a map shape configuration is exported, a JSON file is generated and saved (automatically) to the default download folder for your browser. All mapping rules and associated transformations are exported. You can then import this file to any other map shape within:
The same process flow
Other process flows for your company profile
Other process flows for any of your linked company profiles
Type | Expected value |
---|---|
String | A text string - for example: `blue`. |
String length | A number which represents the expected string length for the received payload. Here, the 'payload' might refer to a targeted field within the incoming payload, or the entire payload. |
Number | A number - for example: `2`. |
Specific date | A day, month and year, selected from a date picker. |
Dynamic date | A date/time which is relative to a +/- number of units (seconds, minutes, hours, days, months, years). |
Boolean | A `true` or `false` value. |
Null comparison | A field is `null` if it has no defined value. Choose whether the given data path is `null` or `not null`. |
Variable | Designed specifically for cases where you are comparing a variable value as the filter comparison. When selected, a `value type` field is displayed and the expected type for your variable can be selected. This ensures that a true comparison can be made. If you are filtering on multiple variables, the `value type` field should be set to `None`. |

For example, if you want to ensure that an `objectId` field is never empty, you would define a string length filter for `objectId` `greater than` `0`. In this case, toggling the keep matching option ON means that the onward payload will include only items where this field is not empty. Conversely, toggle this option OFF if you want to pass on only the empty items for any reason.

You can use the same principle to check for empty payloads (as opposed to a specific field). In this case you would define a filter for `*` `greater than` `0`.

Similarly, if you only want to consider items where an `itemRef` field is set to `true`, you would define a boolean filter for `itemRef` `equals` `true`.
This page provides guidance on using the map shape to configure field mappings between two connections.
Step 1 Click the source endpoint option:
...source and target selection fields are displayed:
Step 2 Use source and target selection fields to choose the required connector instance and associated endpoints to be mapped - for example:
Any instances defined for your company profile are available to select as the source or target. If you aren't using a connector to retrieve data (for example, you are sending in data via the Inbound API or a webhook), you won't select a source endpoint - instead, use the override source format dropdown field to select the format of your incoming data:
Step 3 Click the generate automatic mapping button:
...when prompted to confirm this operation, click generate mapping:
As we're configuring a new map shape, there's no danger that we would overwrite existing mappings. However, always use this option with caution if you're working with an existing map shape - any existing mapping rules are overwritten when you choose to generate automatic mappings.
If you need to access the generate automatic mapping option for an existing map shape, you need to click into the source and target details first.
Step 4 Patchworks attempts to apply mappings between your given source and target automatically, based on standardised field tags. A mapping rule is added for each source data field and, where possible, a matched target field - for example:
From here you can refine mappings as needed. You can:
Step 5 Toggle the wrap payload option on, if required.
This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects. Typically payload wrapping is defined in the 'pull' connector step but there may be occasions where this option is needed later in the flow, from the map step.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, `{customer_1},{customer_2},{customer_3}` is pushed as `[{customer_1},{customer_2},{customer_3}]`.
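A minimal sketch of the same idea, assuming three hypothetical customer objects:

```python
import json

# A series of 'unwrapped' objects, as separate JSON documents (hypothetical shapes).
unwrapped = '{"id": 1}\n{"id": 2}\n{"id": 3}'

# Wrapping collects them into one array object.
wrapped = [json.loads(line) for line in unwrapped.splitlines()]
print(json.dumps(wrapped))  # [{"id": 1}, {"id": 2}, {"id": 3}]
```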
Step 6 Save changes.
You can add as many new mapping rules as required to map data between source and target connections.
There may be times where you don't want to (or can't) use the payload fields dropdown to select a field from your source/target data schema. In this case, you simply select the manual input field and enter the full schema path for the required field.
You can change the display name and/or the field associated with the source or target for any mapping rule.
If you've used the generate automatic mapping option to generate an initial set of mappings, you may find that some source fields could not be auto-mapped. In these cases, a mapping rule is added for each unmapped source field, so you can either add the required destination or delete the rule.
If required, you can map a source field to multiple target fields - for example, you might need to send a customer order number into two (or more) target fields.
Sometimes it can be useful to map multiple source fields to a single target field. For example, you might have a target connection which expects a single field for 'full name', but a source connection with one field for 'first name' and another field for 'surname'.
In this case, you would define mappings for the required source and target fields, then add a transform function to concatenate the two source fields.
When you choose to delete a mapping rule, it's removed from the list immediately. However, the deletion is not permanent until you choose to save the mapping shape.
Available transform functions for use in process flows.
Available transform functions are summarised in the following categories:
Date

Transform | Description |
---|---|
Custom dynamic date | Apply the date that a process flow runs, with or without adjustments. |
Custom static date | Apply a static date. |
Format | Convert a date to a predefined or custom format. |
 | Round a date up/down to the start/end of the day. |
Time now | Returns the current date and time in your required format. |
Timezone | Convert dates to a selected timezone. |

String

Transform | Description |
---|---|
Cast to float | Change the source field data type from `string` to `float`. |
Cast to number | Change the source field data type from `string` to `number`. |
Concatenate | Join selected fields with a selected character. |
Country code | Apply country codes of a selected type (Alpha 1, Alpha 2, Numeric). |
Country name | Return the country name for a country code. |
Custom string | Apply static text or reference variables. |
First word | Get the first word from a string. |
Hash | Convert a string to a SHA1 Hash. |
JSON encode | Encode data into JSON format. |
Last word | Get the last word from a string. |
Limit | Truncates a string to a given length. |
Lowercase | Convert to lowercase. |
Pad | Pad an existing string of characters to a given length, using a given character. |
Prefix | Add a string to the beginning of a field. |
Replace | Replace any given character(s) with another character. |
Substring | Return a given number of characters from a given start. |
Suffix | Add a string to the end of a field. |
Trim whitespace | Remove any characters around a string. |
Uppercase | Convert to uppercase. |
URL decode | Convert an encoded URL into a readable format. |
URL encode | Convert a string to a URL encoded string. |

Number

Transform | Description |
---|---|
Cast to string | Change the source field data type from `number` to `string`. |
Ceiling | Round up to the nearest whole number. |
Custom number | Apply a static number. |
Floor | Round down to the nearest whole number. |
Make negative | Convert number to a negative. |
Make positive | Convert number to a positive. |
Math | Perform a mathematical operation for selected fields. |
Round number | Change the number of decimal places. |

Other

Transform | Description |
---|---|
Cast boolean to string | Change the source field data type from `boolean` to `string`. |
 | Reference a value from cached data. |
Convert weight | Convert a specified weight unit to a given alternative. |
Custom boolean | Apply a true or false value. |
Fallback | Set a default value to be used if the given input is empty. Blank values are supported. |
Map | Convert values using a cross-reference lookup. |
Null value | Convert a source value to null. |
Script | Apply a field-level custom script. Note that a script will time out if it runs for more than 120 seconds. |
The cast to number transform function is used to change the data type associated with a source field from `string` to `number`. For example, you might have an `id` field in a source system that's stored as a `string` value, but your destination system expects the `id` to be a `number`.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to number from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
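In effect, the transform does the equivalent of this Python sketch (the `id` value is hypothetical):

```python
source_value = "1001"             # id held as a string in the source system
target_value = int(source_value)  # cast to a number for the destination system
print(target_value)               # 1001 (a number, not a string)
```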
The cast to string transform function is used to change the data type associated with a source field from `number` to `string`. For example, you might have an `id` field in a source system that's stored as a `number` value, but your destination system expects the `id` to be a `string`.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to string from the number category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom boolean transform function is used to map a value of `true` or `false` to a target field.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom boolean:
Step 5 Move down to the value field and select your required true/false value - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the selected custom boolean value will be mapped to the given target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom boolean transformation is used we don't select a source field - the custom boolean transformation is our data source.
The custom static date transform function is used to set a target field to a given date and time.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom static date:
Step 5 Click anywhere in the date field, or click the calendar icon, to open a date picker:
Step 6 Set the required date and time.
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom static date will be mapped to the given target field.
The format transform function is used to change a date value to a different format. For example:
...might be changed to:
A range of predefined date formats is available for selection, or you can set your own custom format.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select format from the date category:
Step 5 Click in the format field to select a predefined date format that incoming dates should be converted to:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
Internally, the format transform function uses Laravel's date format methods, which in turn call PHP date format methods. Commonly used format specifiers are listed below - full details are available in this Laravel guide.
The following characters are commonly used to specify days in custom format dates.
Specifier | Summary |
---|---|
`d` | Day of the month with leading zeros (01 to 31). |
`j` | Day of the month without leading zeros (1 to 31). |
`D` | A textual representation of a day in three letters (Mon to Sun). |
`l` | A full textual representation of the day of the week (Monday to Sunday). |

The following characters are commonly used to specify months in custom format dates.

Specifier | Summary |
---|---|
`m` | Numeric representation of a month with leading zeros (01 to 12). |
`n` | Numeric representation of a month without leading zeros (1 to 12). |
`M` | A textual representation of a month in three letters (Jan to Dec). |
`F` | A full textual representation of a month (January to December). |

The following characters are commonly used to specify years in custom format dates.

Specifier | Summary |
---|---|
`Y` | Four-digit representation of the year (e.g. 2023). |
`y` | Two-digit representation of the year (e.g. 23). |

The following characters are commonly used to specify times in custom format dates.

Specifier | Summary |
---|---|
`H` | Hour in 24-hour format with leading zeros (00 to 23). |
`i` | Minutes with leading zeros (00 to 59). |
`s` | Seconds with leading zeros (00 to 59). |
`a` | Lowercase Ante meridiem (am) or Post meridiem (pm). |
`A` | Uppercase Ante meridiem (AM) or Post meridiem (PM). |

Unix Epoch dates must be received as a number, not a string - i.e. `1701734400` rather than `"1701734400"`. If your Unix dates are provided as strings, you should convert these to numbers. To achieve this, add a cast to number transform for the date field BEFORE the date format transform function.
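To illustrate why the numeric form matters, here's a hedged Python equivalent of formatting a Unix Epoch date (the output format is just an example):

```python
from datetime import datetime, timezone

epoch = 1701734400  # must be a number; "1701734400" would need casting first
dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2023-12-05 00:00:00
```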
The round number transform function is used to change the number of decimal places for a number value. For example:
...might be changed to:
With the round number transform you can specify the number of decimal places that should be applied to incoming numeric values.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round number from the number category:
Step 5 Move to the decimal places field and enter the number of decimal places required for transformed values - for example:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
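The effect is equivalent to this small Python sketch, assuming decimal places is set to 2 and a hypothetical incoming value:

```python
value = 10.34527
print(round(value, 2))  # 10.35 - two decimal places applied
```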
The script transform function is used to apply an existing custom script to the source value, and the updated field value is pushed to the target field.
Make sure that the required script has been created and tested before applying it as a transform function.
Any payloads passed in and out of a script transform are verbatim - there is no JSON encoding.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select script:
Step 5 Click in the script version field and select the script/version that you want to apply for this field transformation - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select/update the target field and then the mapping in the usual way.
The route shape is used for cases where a process flow needs to split into multiple paths, based on a given set of conditions. Conditions are defined based on any fields found in the schema associated with your source data, so the scope for using routes is huge.
To define multiple routes for your process flow you must:
Add a route shape.
Configure the route shape to add required routes and conditions.
Build the flow for each configured route by adding shapes in the usual way.
By default, multiple routes are processed in parallel when a process flow runs.
When you add a route shape to a process flow, the shape is added to your canvas with two placeholder route stems - for example:
To configure these routes (and add more if needed) click the 'cog' icon associated with this shape to access route settings.
Follow the steps below to configure route data for a route shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload for the route shape is coming from - for example:
Step 2 Select a routing method to determine what should happen if a payload record matches conditions defined for more than one route:
These options are summarised below:
Step 3 Click the 'edit' icon associated with the first route:
Step 4 Enter your required name for this route - it's a good idea to ensure this provides an indication of the route's purpose. For example:
Step 5 Click the add new filter button:
Step 6 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add syntax for dynamic variables:
The manual data path field supports metadata variables.
Step 7 Use remaining operator, type and value options to define the required filter.
Step 8 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
Step 9 Click the save button (at the bottom of the panel) to confirm these settings. The new rule is added for your first route - for example:
Step 10 Repeat steps 5 to 9 to add any additional rules for this route. When you've added all required rules, click the save/update button at the bottom of the panel.
Step 11 Repeat steps 3 to 9 to configure the second route.
Step 12 Add any additional routes required using the add route button. Each time you add a new route, the canvas updates with an additional route stem from your route shape.
Step 13 Save your changes.
Having defined all required routes and associated conditions, the route shape on the canvas will have the corresponding number of route stems, ready for you to add shapes. For example:
Click the + sign for each branch in turn, then add the required shapes for each route flow.
The run flow shape is used to call one process flow from another, so you can run process flows in a chain. For example, you might have a process flow that receives data from a webhook, applies filters and then hits a run flow shape to call another flow with that data.
Default behaviour is for the payload from the end of the calling process flow to be sent into the called process flow for onward processing. However, when configuring a run flow shape you have the option to add a manual payload - in this case, your manual payload will be sent into the called process flow.
The run flow shape also allows you to choose whether any variables associated with the called process flow should be applied.
If you don't configure a manual payload in the run flow shape, the final payload from the calling process flow is always sent into the called process flow.
You cannot create a recursive process flow loop - for example, if Process Flow A calls Process Flow B, you cannot then call Process Flow A from Process Flow B.
Step 1 In your process flow, add the run flow shape in the usual way:
Step 2 Click in the flow field and select the process flow that you want to call/run:
You'll only see enabled & deployed process flows here.
Step 3 If your selected process flow is associated with any process variables, these are shown - you can choose to enable or disable these:
Step 4 If you want to pass a manual payload into this process flow, toggle the specify payload manually option ON and paste the required payload into the supplied payload field:
The manual payload can be any format - JSON, XML, plain text, etc.
Step 5 Save the shape.
You can use any version of a script which has been saved and deployed.
Creating a custom script is an advanced feature which requires some in-house development expertise.
Step 1 In your process flow, add the script shape in the usual way:
Step 2 You're prompted to select an existing script:
Step 3 Select the script that you want to use at this point in the process flow:
The list of available scripts only includes scripts which are currently deployed for use.
Step 4 The latest deployed version of the script is added to the shape - for example:
Code is displayed in view-mode. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
Step 5 Unless you have a specific reason to do otherwise, we advise using the latest version of scripts. However, if you do need to use a previous version of the script, select the 'versions' dropdown field to make your selection - for example:
Step 6 Save the shape:
To view/change the selected script for an existing script shape, click the associated 'cog' icon:
From here, the existing script is displayed - you can either select a different script, or a different version of the existing script:
Remember that the script code can't be changed here. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
A script will time out if it runs for more than 120 seconds.
A process flow can only be called from a run flow shape if it is enabled and deployed.
The deployed version of a process flow is always used.
If you have defined custom scripts for use in process flows, use the script shape to select a script to apply at a given point in a process flow.
Option | Summary |
---|---|
Follow all matching routes | If a record matches defined conditions for multiple routes, send it for onward processing down all matched routes. |
Follow first matching route only | If a record matches defined conditions for multiple routes, send it for onward processing down the first matched route, but no more. |
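The difference between the two routing methods can be sketched in Python (route names and conditions are hypothetical):

```python
# Conceptual sketch only - not actual platform code.
routes = [
    ("uk_orders",  lambda r: r["country"] == "UK"),
    ("all_orders", lambda r: True),
]

record = {"country": "UK"}
matched = [name for name, condition in routes if condition(record)]

print(matched)      # follow all matching routes: ['uk_orders', 'all_orders']
print(matched[:1])  # follow first matching route only: ['uk_orders']
```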
The set variables shape is used to set values for flow variables and/or metadata variables at any point in a flow.
When defining variable values you can use:
static text (e.g. `blue-003`)
payload syntax (e.g. `[[payload.productColour]]`)
flow variable syntax (e.g. `{{flow.variables.productColour}}`)
Step 1 In your process flow, add the set variables shape in the usual way:
Step 2 Access shape settings:
Step 3 Click the add new variable option associated with the type of variable that you want to define - for example:
Step 4 Options are displayed for you to define the required variables and their values. How these are displayed depends on the type of variable you've chosen to add:
Step 5 Once variables are accepted they're added to the settings panel (you can edit/delete as needed):
Step 6 Save the shape.
The trigger webhook option can be used if you want to trigger a process flow whenever a given event occurs in your third-party application.
How you use webhooks is driven by your business requirements, and the capabilities of your third-party application. For example, your third-party application might send a webhook which includes a batch of orders to be processed in the `body`, or the webhook `body` might simply contain a notification message indicating that orders are ready for you to pull.
Patchworks webhook URLs are generated in the form:
For example:
The `{{webhook_id}}` is a Patchworks signature which is generated as a random hash (that doesn't expire). This provides built-in authentication for our URLs; however, they should still be kept private.
The default response for a successful webhook trigger is a status code of `200`, with the following body:
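As a rough illustration, a third-party application might trigger the webhook with an HTTP POST along these lines (the URL is a placeholder for your generated webhook URL, and the request body is hypothetical):

```python
import requests  # third-party package: pip install requests

webhook_url = "https://<your-generated-patchworks-webhook-url>"
response = requests.post(webhook_url, json={"message": "orders ready to pull"})
print(response.status_code)  # 200 indicates a successful trigger
```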
Follow the steps below to add a new webhook trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new webhook button:
...a unique Patchworks webhook URL is generated:
Step 3 Copy this URL and paste it into your third-party application.
The documentation for your third-party application should guide you through any required setup for webhooks.
Step 4 If you want to customise the response for your webhook, click the edit icon associated with the URL and make the required changes - see changing the default response, below.
Step 5 Build the rest of your process flow as needed to handle incoming data from your defined webhook(s).
Step 6 Make sure that your process flow is enabled and deployed - webhooks will not be received if this isn't done.
If required, you can change the default response for your webhook by selecting the 'edit' icon associated with the URL - for example:
Here, you'll find options to select an alternative status code and specify new body text:
Here you can:
Use the status code dropdown field to select the required response code.
Enter the required text in the body field.
Select the required format for your body content (choose from JSON, XML or Plain text).
The split shape is used to split out a given element of a payload. When you split data, the specified element (including any nested elements) is extracted for onward processing.
Step 1 In your process flow, add the split shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates:
Step 3 Move down to the level to split section and use the dropdown data path to select the required data element to split - for example:
Remember - any data (including nested data) within the selected element will be split out into a new payload.
Step 4 If required, you can add a wrapper key. This wraps the entire payload in an element of the given name - for example:
...would wrap the payload as shown below:
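As an illustration, with a hypothetical wrapper key of `orders`, a split payload of two objects would be wrapped like this:

```python
import json

split_payload = [{"id": 1}, {"id": 2}]  # the element split from the payload
wrapped = {"orders": split_payload}     # wrapper key "orders" (hypothetical)
print(json.dumps(wrapped))              # {"orders": [{"id": 1}, {"id": 2}]}
```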
Step 5 Save the shape.
When you choose to add a webhook trigger to a process flow, a unique Patchworks webhook URL is auto-generated. This URL must be added to your third-party application, so it knows where to send event data.
For webhooks to be received, a process flow must be enabled and deployed.
If required, you can change the default response for your webhook.
For example, your process flow might be pulling customer data from a source connection, but you need to send address details to a different endpoint. In this case, you'd use the route shape to create two different routes, mapping just customer data down one, and splitting out addresses for the other.
All process flows must begin with a trigger - it’s this that determines when and how frequently the process flow will run. For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings:
Having accessed settings for a trigger shape, you can select the required trigger type - this determines any subsequent options that are displayed:
For example, you might choose to track all `customer_id` values that pass through the shape, so at any time you can quickly check if a given customer record has been processed:
Data that is tracked via the track data shape is retained for one year.
JSON payloads are supported.
To add and configure a new track shape, follow the steps below.
Step 1 In your process flow, add the track data shape in the usual way:
Step 2 You can now configure the track data shape. How you do this depends on whether the data to be tracked is being added to the process flow via a connector shape, or from a non-connector source (such as a manual payload, inbound API request, or webhook):
Trigger schedule options are used to schedule the associated process flow to run at a specified frequency and/or time. Here, you can use intuitive selection options to define your requirements or - if you are familiar with cron expressions - use advanced options to build your own expression.
Schedules can be defined based on the following occurrences:
Define a schedule to run every `x` minutes - for example:
Define a schedule to run every `x` hours - for example:
Define a schedule to run on selected days of the week at a given start time - for example:
Use the every dropdown list for quick daily presets, or define custom settings:
Define a schedule to run on selected days of the month (or selected weeks), for selected months, at a given start time - for example:
Use the every dropdown list for quick monthly presets, or define custom settings:
If you are familiar with cron expressions, you can select this option to activate the cron expression field beneath and enter your required expression directly:
Patchworks supports the 'standard' cron expression format, consisting of five fields: minute, hour, day of the month, month, and day of the week.
Each field is represented by a number or an asterisk (*) to indicate any value. For example, `0 5 * * *` would run at 5 am, every day of the week.
Extended cron expressions (where six characters can be used to include seconds or seven characters to include seconds and year) are not supported.
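For reference, a few more examples of valid five-field expressions:
`*/15 * * * *` - every 15 minutes
`0 */2 * * *` - every 2 hours, on the hour
`30 6 * * 1-5` - at 06:30, Monday to Friday
`0 0 1 * *` - at midnight on the first day of every month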
Follow the steps below to add a new trigger schedule.
Step 1 To add a new schedule, click the add new schedule button:
Step 2 Select an occurrence.
Step 3 Define your required settings for the occurrence.
Step 4 Click save to save this schedule. The schedule is added to the shape - for example:
You can add a single schedule, or multiple schedules. When you add multiple schedules, ALL of them will be active.
The track data shape can be used to track processed data, based on field paths that you define. When data passes through a track data shape, values associated with your defined field paths are tracked, which means they are added to the tracked data store.
The track data shape works with incoming payloads from a connector shape, and also from a manual payload, inbound API request, or webhook.
Flow and metadata variables are supported when defining field paths.
Trigger schedules are based on cron expressions.
Note: If you want to specify flow or metadata variables for your field path, toggle the manual input option ON and enter your value manually, then click the create button to confirm.
This is preview documentation for a feature that's scheduled in an upcoming release.
Event connectors can trigger process flows by listening for events that are published to message queues/topics by a message broker (e.g. RabbitMQ).
Once an event connector is configured, it becomes available for use as a process flow trigger.
New event connectors are available for selection in process flow trigger shapes as soon as they are saved successfully.
ALL messages published to selected queues/topics are passed through to the process flow.
Follow the steps below to add an event connector as a process flow trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new event listener button:
...event options are displayed:
Step 3 Select a broker from the list (all configured event connectors are available for selection):
Step 4 Select a queue from the list - all configured message queues/topics for the selected broker (i.e. event connector) are available for selection:
Step 5 Save the shape settings.
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
`FTP GET` is used to retrieve files from the given server (i.e. to receive data)
`FTP PUT` is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields please see our Configuring SFTP connections page - details are the same.
If required, you can import existing data into a de-dupe pool. For example, you may have records that you know have been processed elsewhere and want to ensure that they aren't processed via Patchworks.
Conversely, you can export de-dupe pool data to a CSV file, for use outside of Patchworks.
De-dupe data exports are completed in CSV format, delimited ONLY with a single comma between fields.
The exported file includes two columns with `value` and `entity_type_id` headers. For example:
When de-dupe data values are imported:
All records in the import file are added to the data pool as new items
Any existing items in the data pool are unchecked and unchanged
To import de-dupe values, the import file must be in the same format as export files above, with the same headers. I.e.:
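For illustration, a valid import file might look like this (the values and the entity_type_id shown are hypothetical - use the ids from your downloaded entity types list):

```
value,entity_type_id
ORD-1001,3
ORD-1002,3
```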
Where:
The `value` is the key field value that you are matching on.
The `entity_type_id` is the internal Patchworks id for the entity type associated with the key field that you are using to match duplicates. This id must be present for every entry in your CSV file. You can download a list of ids by following the steps detailed later in this page.
Import files cannot exceed 5MB.
To export/download a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the name of the data pool that you want to export:
Alternatively, you can create a new data pool.
Step 3 With the data pool in edit mode, move to the lower tracked de-dupe data panel and click the download button:
Step 4 The download job is added to a queue and a confirmation message is displayed:
Step 5 When your download is ready, you'll receive an email which includes a link to retrieve the file from the file downloads page. If you can't/don't want to use this link, you can access this page manually - click data pools in the breadcrumb trail at the top of the page:
...followed by the settings element option:
Step 6 Select the file downloads option from the settings page:
Step 7 On the file downloads page, you'll find any exports that have been completed for your company profile in the last hour.
This list may include exports from different parts of the dashboard, not just data pools (for example, run log and cross-reference lookup data exports are added here).
Step 8 Click the download button for your job - the associated CSV file is saved to the default downloads folder for your browser.
Download files are cleared after one hour. If you don't manage to download your file within this time, don't worry - just run the export again to create a new one.
If you want to import data into a de-dupe data pool, you need to ensure that each record in your CSV file includes an entity_type_id. To find which id you should use, follow the steps below to download a current list.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the download entity types button at the top of the page:
Step 3 A CSV file is saved to the default downloads folder for your browser.
To import data into a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 If you want to import data into an existing data pool, click the name of the required data pool from the list:
Alternatively, you can create a new data pool.
Step 3 Move to the lower tracked de-dupe data panel and click the import button:
Step 4 Navigate to the CSV file that you want to import and select it:
Step 5 The file is uploaded and displayed as a button - click this button to complete the import:
Step 6 The import is completed - existing values are updated and new values are added:
You may need to refresh the page to view the updated data pool.
The de-dupe shape is used to identify and then remove duplicate entries from an incoming payload. For more background information please see our De-dupe shape page.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
Currently, the de-dupe shape supports JSON payloads only.
To add and configure a new de-dupe shape, follow the steps below.
Step 1 In your process flow, add the de-dupe shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be de-duped originates - for example:
If your incoming data is via manual payload, inbound API or webhook then you can remove any default source instance and endpoint selections:
Step 3 Move down to the behaviour field and select the required option.
For more information about these options please see our De-dupe shape behaviour section.
Step 4 Move down to the data pool field and select the required data pool.
If necessary, you can create a data pool 'on the fly' using the create data pool option. For more information please see Adding a new data pool via the de-dupe shape.
Step 5 In the key field, select/enter the data field to be used for matching duplicate records. How you do this depends on how the incoming data is being received - please see the options below:
The selection that you make here determines how the payload is adjusted when duplicate data is removed. For more information please see How duplicate data is handled.
Step 6 Select the payload format:
Step 7 Save the shape.
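Conceptually, the shape's 'remove duplicates' behaviour works like this Python sketch, assuming a key of `id` and a simplified data pool (the real pool is persisted between runs):

```python
data_pool = {"1001"}  # key values tracked from earlier runs (simplified to a set)

payload = [{"id": "1001"}, {"id": "1002"}, {"id": "1002"}]
deduped = []
for record in payload:
    key = str(record["id"])
    if key not in data_pool:
        data_pool.add(key)      # track the newly seen value
        deduped.append(record)  # keep the first occurrence only

print(deduped)  # [{'id': '1002'}] - 1001 was already in the pool
```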
The cast boolean to string transform function is used to change the data type associated with a source field from `boolean` to `string`. A `boolean` data type can have only two possible states: `true` or `false`.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast boolean to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The cast to float transform function is used to change the data type associated with a source field from `string` to `float`. A float is a type of number which uses a floating point to represent a decimal or fractional value. Floats are typically used for very large or very small values with many digits after the point - for example: 5.3333 or 0.0001.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to float from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom dynamic date transform function is used to set a target field to the current date and time, based on the date and time that the process flow runs. You can also define rounding and adjustments.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom dynamic date:
Step 5 Optionally, you can add adjustment settings - for example:
These options are summarised below:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom dynamic date will be mapped to the given target field.
Field | Summary | Example |
---|---|---|
Round | Select `start of day` to change the time to `00:00:00` for the date the process flow is run. Select `end of day` to change the time to `23:59:59` for the date the process flow is run. | Suppose that the process flow runs at the following date/time: `2023-08-10 10:30:00`. If set to `start of day`, the transformed value would be `2023-08-10 00:00:00`. If set to `end of day`, the transformed value would be `2023-08-10 23:59:59`. |
Units | If you want to adjust the date/time, select the required unit - choose from `second`, `minute`, `hour`, `day`, `week`, `month` or `year`. | Suppose that the process flow runs at `2023-08-10 10:30:00` and you want to adjust it by 1 day. In this case, you would select `day` as the unit and specify `1` as the adjustment value. However, if you wanted to adjust by 1.5 days, you would set the unit to `hour` and specify `36` as the adjustment value. |
Adjustment | Having selected an adjustment unit, enter the required number of that unit here. | See the units examples above. |
The concatenate transform function is used to join the values for two or more source fields (using a given joining character) and then map the output of this transformation to a destination field.
For example, you might have a source system that captures the `first name` and `last name` for customer records, and then a destination system that expects this information in a single `name` field.
In the instructions below, we'll step through the scenario mentioned above, where our incoming payload includes the `first name` and `last name` for customer records, but our destination system expects this information in a single `full_name` field. The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be joined and then pushed to the specified destination.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a concatenate transformation. In the example below, we have a row that's currently set to map the source `first name` field into the destination `full name` field:
Step 3 On the source side of the mapping row, we need to add all the fields that need to be joined. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to join - for example:
In our example, our source data is coming in via a manual payload, so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to join.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a concatenate transform function to join the values for these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select concatenate from the string section in the list of transform functions:
...concatenate options are displayed:
Step 4 In the join character field, enter the character that you want to use to join your source fields - for example, a hyphen or a space:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be joined:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be joined - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to add any more source fields to be joined. Each time you accept a new source field, you'll see the sequence in which the fields will be processed when this transform function runs - for example:
Fields are joined in the sequence that they are added here.
Step 11 Having added all required source fields to be joined, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the given source fields for this mapping row will be joined and then that value is pushed to the target. The example below shows an incoming payload before and after the concatenate transformation is applied:
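In effect, the transform performs the equivalent of this Python sketch for each record (field names are from the scenario above; the join character here is a space):

```python
record = {"first name": "Ada", "last name": "Lovelace"}  # hypothetical record
join_character = " "
full_name = join_character.join([record["first name"], record["last name"]])
print(full_name)  # Ada Lovelace - pushed to the target full_name field
```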
The custom number transform function is used to map a given number to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom number transformation is used we don't select a source field - the custom number transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom number:
Step 5 Move down to the custom number field and enter your required number - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom number will be mapped to the given target field.
The math transform function is used to perform a mathematical operation for selected fields. For example, your incoming payload might include customer records, each with a series of numeric `value` fields that need to be added together so the total can be pushed to a `total` field in the target system.
The following mathematical operations are available:
Add
Subtract
Multiply
Divide
In the instructions below, we'll step through the scenario mentioned above, where our incoming payload includes customer records, each with three value fields (`value1`, `value2`, `value3`) that must be added together and pushed to a `total` field in the target system.
The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be added together and then pushed to the target.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a math transformation. In the example below, we have a row that's currently set to map the source `value1` field into the destination `total` field:
Step 3 On the source side of the mapping row, we need to include all the fields to be used in our mathematical operation. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to use - for example:
In our example, our source data is coming in via a manual payload, so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to include in the mathematical operation.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a math transform function to define the required calculation based on these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select math from the number section in the list of transform functions:
...math options are displayed:
Step 4 Click in the operator field and select the type of calculation to be performed - you can choose from add, subtract, multiply and divide:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be used in the calculation:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be used - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to include any more source fields to be used in the calculation. Each time you accept a new source field, you'll see the sequence in which they will be processed when this transform function runs - for example:
Fields are processed in the sequence that they are added here.
Step 11 Having added all required source fields to be calculated, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the mathematical operation will be performed on the given source fields and the resulting value pushed to the defined target field. The example below shows an incoming payload before and after the math transformation is applied:
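As a rough sketch of the scenario above (the sample values are illustrative assumptions; only the field names value1, value2, value3 and total come from this example):

    # Hypothetical customer record - values are illustrative assumptions
    record = {"value1": 10, "value2": 25, "value3": 5}

    # Math (sketch): apply the chosen operator across the selected source
    # fields, in the order they were added; 'add' is used here
    record["total"] = record["value1"] + record["value2"] + record["value3"]

    print(record)
    # {'value1': 10, 'value2': 25, 'value3': 5, 'total': 40}

For non-commutative operators (subtract and divide), the sequence in which source fields were added determines the result.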
The null value transform function is used to replace the value of a source field with null.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null value from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
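The effect of the transform itself can be sketched as follows (the field name and value are illustrative assumptions):

    # Hypothetical payload - field name and value are illustrative assumptions
    payload = {"discount_code": "SUMMER10"}

    # Null value (sketch): the selected source field's value is replaced
    # with null (None in Python, null in JSON)
    payload["discount_code"] = None

    print(payload)  # {'discount_code': None}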
The custom string transform function is used to map a given string to a target field. This string can be static, or you can reference flow variables and cached data.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom string transformation is used we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom string:
Step 5 Move down to the custom string field and enter your required text or variables - for example:
For more information about referencing flow variables in a custom string, please see our Referencing flow variables in field mapping transformations page. For more information about referencing cached data in a custom string, please see our Referencing a cache in mapping transformations page.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom string (or associated values from variables) will be mapped to the given target field.
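As a rough illustration, the sketch below builds a custom string that mixes static text with a value resolved from a flow variable (the variable name, target field name, and values are illustrative assumptions):

    # Hypothetical flow variable - name and value are illustrative assumptions
    flow_variables = {"store_name": "EU Webstore"}

    # Custom string (sketch): the configured string - optionally referencing
    # flow variables or cached data - is the data source for the target field
    custom_string = f"Order imported from {flow_variables['store_name']}"

    target = {"note": custom_string}  # hypothetical target field
    print(target)  # {'note': 'Order imported from EU Webstore'}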
The pad transform function is used to pad an existing string of characters to a given length, using a given character. You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
The payload item below contains a string that's 8 characters long:
If we apply padding to a length of 20 using a * character to the right, the result would be:
Here, we have an extra 12 * characters to the right, giving a string length of 20. However, if we apply the * character to both, the result would be:
Now the padding is applied with 6 characters to the left of the original string and 6 characters to the right.
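Python's built-in padding helpers give a convenient sketch of all three directions (the 8-character sample string is an illustrative assumption):

    value = "ABC-1234"  # hypothetical 8-character string

    # Pad (sketch): pad to a length of 20 using the * character
    print(value.ljust(20, "*"))   # right:  ABC-1234************
    print(value.rjust(20, "*"))   # left:   ************ABC-1234
    print(value.center(20, "*"))  # both:   ******ABC-1234******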
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select pad from the string section:
Step 5 Click in the direction field and select where you would like padding to be applied:
You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
Step 6 In the length field, specify the number of characters that you'd like the final (i.e. transformed) string to be - for example:
Step 7 In the pad character field, specify the character that you'd like to use for padding - for example:
If you want padding to be applied with spaces, press the space bar once in this field.
Step 8 Click the add field button:
Step 9 Click in source fields and select the source field to be used for this transform:
Step 10 Accept your changes (twice).
Step 11 Save the transform.
The round date transform function is used to round source dates to either the start or end of the day, where:
start of day changes the time to 00:00 for the received date
end of day changes the time to 23:59 for the received date
So, you can round a given source date before sending the rounded value into a given target field.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round date:
Step 5 Choose your required rounding:
Step 6 Accept your changes and save the transformation - at this point your mapping row is displayed without a target. From here, you can go ahead and add a target field:
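The rounding itself can be sketched as follows (the sample timestamp is an illustrative assumption, as is the handling of seconds):

    from datetime import datetime

    received = datetime.fromisoformat("2024-05-14T09:42:17")  # hypothetical source date

    # Round date (sketch): start of day -> 00:00, end of day -> 23:59,
    # keeping the received date; zeroing the seconds is an assumption
    start_of_day = received.replace(hour=0, minute=0, second=0)
    end_of_day = received.replace(hour=23, minute=59, second=0)

    print(start_of_day)  # 2024-05-14 00:00:00
    print(end_of_day)    # 2024-05-14 23:59:00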
The replace transform function is used to replace an existing source string value with either:
An alternative string value
An empty value
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select replace from the string category:
Step 5 Update search and replace fields with your required values:
For the replace field, you can enter another string or leave the field blank to replace the source with an empty value.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
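The replace behaviour can be illustrated like this (the field name and values are illustrative assumptions):

    # Hypothetical payload - field name and values are illustrative assumptions
    payload = {"status": "ON HOLD"}

    # Replace (sketch): swap the search string for the replace string;
    # leaving the replace value blank would clear the match instead
    payload["status"] = payload["status"].replace("ON HOLD", "PENDING")

    print(payload)  # {'status': 'PENDING'}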