Process flows are built by dragging and dropping shapes onto a canvas, and then configuring those shapes to work in the way you need to exchange data between connector instances.
Process flows are extremely flexible. You can build something very simple to sync data between two instances with standard field mappings - or build more complex flows, perhaps using custom scripts and/or routing data to different paths based on given conditions.
Before you start building a process flow, make sure that you've installed a connector and added your required instances for any third-party applications that you want to use.
When you add a process flow, you're given a new, blank canvas to start building your data flow using automated shapes.
A process flow can be as simple or as complex as you need, and you can add as many as you like - maybe one flow will fulfil all of your requirements, or maybe you'll need several to achieve different things.
To add a new process flow, follow the steps below.
Step 1 Log in to the Patchworks dashboard, then select process flows > process flows to access the manage process flows page.
Step 2 Click the create new flow button:
Step 3 Enter a name for this flow, then click the next step button.
It's advisable to use a name that reflects the aim of this process flow. Names must be three characters or more.
Step 4 Version 1 of the new process flow is created and displayed on the process flow canvas - for example:
This is a draft version which you can use to build your data flow using automated shapes. Once you're satisfied that your process flow is working as required, you can deploy it for use. For more information about this process please see our Process flow versioning page.
All process flows must begin with a trigger - what event is going to trigger this flow to run? For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings (for example, to set a schedule).
With our process flow versioning system, you can be sure that a process flow that's currently deployed will never be edited (possibly with breaking changes) while it's in use.
To edit a deployed process flow, you take a copy as a draft and work on that - when you're ready, you can then deploy your draft.
Each time that you deploy a new version of a process flow, the previously deployed version is saved as an inactive version for future reference and, if required, future use.
For any process flow, there's always one draft version, one deployed version, and any number of inactive versions.
At any given time, a process flow can be associated with one of the following version types:
There is always one draft version of a process flow. The draft version can be edited freely without any possibility of changing or breaking the version that's currently deployed. With a draft version, you can add/update shapes, and change the process flow name.
Any trigger shape settings defined for a draft version are ignored - draft versions are never triggered to run automatically.
When you're working with a draft version of a process flow, you can take the following actions:
Enable/disable the process flow. If you enable a process flow when viewing a draft version, there's no impact on the draft version. However, the deployed version will start to run automatically as per its trigger shape settings.
Run manually. Use this option to run the draft process flow immediately.
If you choose to run the draft version of a process flow manually, the draft version runs and any target connections will be updated. Where possible, it's always best to use sandbox connections when you're editing and testing draft process flows.
The deployed version of a process flow is the one that's currently in use (if it's enabled) or ready for use (if it's disabled).
The deployed version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with a deployed version of a process flow are:
Enable/disable the process flow. Just because a process flow version is deployed, it doesn't necessarily mean that it will be triggered to run automatically as per trigger shape settings. For this to happen, a process flow must be both deployed AND enabled.
Run manually. Use this option to run the process flow immediately.
Copy to draft. When you do this, the process flow remains deployed and an exact copy is taken as the current draft version, ready for you to edit - the existing draft version is discarded. This is a good solution if you've been editing a draft but reached the point where you need to restart from a known sound point.
Each time a draft version of a process flow is deployed, the previously deployed version becomes an inactive version - so you have a full version history for all deployed versions of a process flow.
An inactive version of a process flow cannot be edited - shapes can't be added/updated, and you can't change the name. The only actions that you can take with an inactive version of a process flow are:
Enable/disable the process flow. If you enable a process flow when viewing an inactive version, there's no impact on the inactive version. However, the deployed version will start to run automatically as per its trigger shape settings.
Run manually. Whilst you can use this option to run the process flow immediately, it's not recommended.
Copy to draft. When you copy an inactive version to draft, an exact copy is taken as the current draft version, ready for you to edit (the existing draft version is discarded). There's no impact on the deployed version.
Deploy. When you deploy an inactive version, it becomes the currently deployed version. The previously deployed version becomes a new inactive version, and the existing draft is not affected.
If you run an inactive version of a process flow manually, the inactive version runs and any target connections will be updated.
You can view all versions of a process flow via the settings panel.
When you access a process flow, the version being viewed is noted in the title bar. If you are viewing a deployed or inactive version, you'll see a message advising that edits cannot be made, and the version number is displayed beneath the title.
The version number is not the same as the version id.
To switch between different versions of a process flow, access the versions list and select the required entry.
Deploying the draft version of a process flow - or deploying an inactive version without editing it as a draft first - is a simple one-click operation from the versions list.
If you want to edit the currently deployed version of a process flow - or an inactive version - you must first copy it to draft. The existing draft version is replaced by the version you copy.
Version | Is set when... | Can be edited? | Transitions |
---|---|---|---|
Draft - the process flow is being built. | A new process flow is added; a deployed version is copied to draft; an inactive version is copied to draft | Yes | Deploy |
Deployed - the process flow is currently in use, or ready for use. | A draft version is deployed; an inactive version is deployed | No | Copy to draft |
Inactive - the process flow was previously deployed but superseded by a later deployment. | A draft version is deployed; an inactive version is deployed | No | Copy to draft; Deploy |
The assert payload shape is typically used for testing purposes - you can define a static payload which is used to validate that the current payload (i.e. the payload generated up to the point that the assert payload shape is encountered) is as expected.
To view/update the settings for an existing assert payload shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure an assert payload shape, all you need to do is paste the required payload and save the shape - for example:
An assert payload shape can only be saved if a payload is present. If you add an assert payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
When a process flow runs, the payload for received data flows through to subsequent steps. In a straightforward scenario we pull data from one connection, then perhaps apply filters and/or scripts before mapping/transforming data fields and finally pushing the payload into a target connection. This is a very linear example - we start with a payload and it flows all the way through to completion.
However, more complex scenarios might need to use a payload that was generated several steps previously, or even from a different process flow. This is where the add to cache and load from cache shapes come in.
Wherever you place an add to cache shape in a process flow, it will cache (i.e. store a copy of) the payload as it stands at that point in the process flow. You can then use a load from cache shape to reference this payload elsewhere in the same process flow and/or in other process flows for your organisation (depending on how the add to cache shape is configured).
For more information please see:
The process flow canvas is where you build and test your process flows in a smart, visual way. This is where you define if, when, what, and how data is synced.
The process flow canvas has four main elements - a title, an actions bar, a shapes area, and a (hidden unless activated) options panel:
The process flow title bar shows the name of the process flow, as specified when it was created. The number above the title is the process flow id and the number below is the version number. To change this title, use the settings option from the actions bar.
Options in the actions bar are summarised below:
The main 'shapes area' is where you build your process flow. Start by clicking the + sign associated with the trigger shape, then choose the required shape for your next step, and start building!
We're adding new shapes all the time! For information about working with shapes, please see our Process flow shapes section.
When you choose to access process flow settings or to configure a shape in your flow, available settings are displayed in a panel on the right-hand side. For example, when we choose to access settings for a trigger shape, available trigger options are displayed:
Option | Summary |
---|---|
Process flow settings | Use this option to access settings for this process flow - these are used to manage settings for this process flow as a whole. For more information please see Process flow settings. |
Return to trigger | If you're working on a longer process flow, use this option to quickly jump back to the start (i.e. back to the trigger shape). |
Run | Use this option to run the process flow immediately. For more information please see Running a process flow manually. |
Stop | If a process flow is running, you can use this option to stop the current run. Stopping a run in this way triggers the flow to stop at its next step; however, if an API call or script has already been triggered, the process flow will stop after these have completed. With this in mind, it's important to check any target connections to clarify what (if any) updates have been made after a process flow has been stopped. |
As a process flow runs, you can view progress in real time; you can also check logs and view payloads at any stage.
The load from cache shape is used to retrieve a stored payload from an existing cache key (created from an add to cache shape).
You might configure a load from cache shape in the same process flow as the original add to cache step or - if a cache was added and set to company level - you might choose to load it in a different process flow.
To add a load from cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to load the payload from a cache - this could be at the very start of a process flow, or perhaps somewhere further down.
Step 2 Select the load from cache shape from the shapes palette:
Step 3 Click in the select cache field and choose which cache you want to retrieve:
In this list, you'll find any caches that have been added to this process flow (via the add to cache shape), together with any caches that have been added to other process flows and set to a cache level of company.
Step 4 Enter the cache key that you want to retrieve - for example:
Your given cache key might be static or dynamic, depending on how the cache was configured in the corresponding add to cache shape:
For detailed information about each of these approaches, please see What cached data do you want to load?
The cache key must be associated with an existing add to cache shape, either in the same process flow or (in the case of company-level caches) in another process flow.
Step 5 If you want this process flow to fail if for any reason this cache can't be retrieved, tick the fail on cache miss option:
If you leave this option un-ticked, the process flow will continue to run if the cache can't be loaded.
Step 6 If the cache that you're loading was created with the save all pages option toggled ON, you should toggle the load all pages option ON when loading this data:
When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON when a cache is created, the payload for each page is saved to its own cache key (with key names generated dynamically from a specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key. For more information please see our Cache pagination options.
Step 7 Save changes. The load from cache shape is added to your process flow, displaying the given name and key - for example:
Yes. As with any other process flow shape, you can view the associated payload for a load from cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
The add to cache shape is used to cache (i.e. store a copy of) the payload as it stands at that point in the process flow.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The maximum cache size is 50MB.
Cache names must not include full stop (.) or colon (:) characters.
To add an add to cache shape to a process flow, follow the steps below.
Step 1 Find the point in your process flow where you want to cache the payload - typically this would be after a 'GET' connection shape, or perhaps after data has been mapped or manipulated via a script.
Step 2 Select the add to cache shape from the shapes palette:
Step 3 Click the create cache option:
...cache options are displayed:
Step 4 Click in the cache level > select cache field to choose when/where this cache will be available:
Choose from the following options:
Step 5 Enter a name for this cache:
The cache name must not include full stop (.) or colon (:) characters.
Step 6 If you have chosen a flow-level or company-level cache, you can set a data retention period to determine when this data will expire - for example:
The data retention period for a flow run-level cache is always 2 hours - this cannot be changed. The maximum retention period for a flow-level or company-level cache is 7 days.
Step 7 Save changes to exit back to add to cache settings where you can continue with your newly created cache.
Step 8 Click in the select a cache field and select your new cache from the list:
Step 9 Enter a cache key to identify this cache object - for example:
Your cache key can be:
A cache key cannot exceed 128 characters.
Step 11 Set the append option as required. If this option is toggled ON, incoming data is appended to the existing cache key each time an update is made. If this option is toggled OFF, the cache key is overwritten with new data each time.
Step 12 Save changes. The add to cache shape is added to your process flow, displaying the given name and key - for example:
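The naming and size constraints described above can be expressed as a small validation helper. This is an illustrative sketch only, not Patchworks code - the function name and error messages are our own:

```python
# Illustrative check of the documented cache constraints:
# names must not contain "." or ":", and keys are limited to 128 characters.

def validate_cache_config(name: str, key: str) -> bool:
    """Raise ValueError if the cache name or key breaks the documented rules."""
    if "." in name or ":" in name:
        raise ValueError("cache name must not include '.' or ':' characters")
    if len(key) > 128:
        raise ValueError("cache key cannot exceed 128 characters")
    return True

validate_cache_config("ALLcustomers", "customers")  # passes without error
```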
You can add as many add to cache shapes as you like in a process flow. For example, you might want to cache a payload as soon as it gets pulled from a source connection, and again later after it's been transformed. For example:
How long a cached payload remains available depends on the cache level selected when you configured the cache.
When a process flow hits an add to cache shape, all data from the incoming payload is cached. With this in mind, ensure that your incoming data is filtered, mapped, and/or manipulated as required beforehand.
The default behaviour is for the existing cache to be overwritten each time it is updated. Please see the Appending data to a cache page for information about appending data.
If you are adding a company-level cache, you may want to make a note of the key that you specify here, so it can be shared with other users in your organisation who may want to load data from this cache.
Step 10 If the incoming payload is paginated, consider how pages should be handled when cached. When paginated data is pulled from a connection shape, a payload is created for each page. If the save all pages option is toggled ON, the payload for each page is saved to its own cache key (with key names generated dynamically from your specified key and page numbers). If the save all pages option is toggled OFF, all pages are saved to a single cache key.
It's important to understand how the save all pages option works in conjunction with the append option. If you aren't sure, please see our Cache pagination options page before proceeding.
Yes. As with any other process flow shape, you can view the associated payload for an add to cache shape after the process flow has run. To do this, click the shape's tick icon and then select the payload tab in the run log panel - for example:
If you place an add to cache shape before a shape which generates multiple payloads (typically, a flow control shape), you can see each payload that is created via the payload dropdown - for example:
Cached data can be loaded via our load from cache shape. Please refer to the Load from cache section for more information.
Cache key | Summary | Example |
---|---|---|
Static | Data is cached to the key exactly as it is specified. Typically used when your aim is to load the entire cache later in the flow (or in other flows). | orders |
Dynamic | The cache key resolves dynamically based on a payload variable. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information please see our Generating dynamic cache keys with payload variables page. | order-[[payload.0.id]] OR order-[[payload.*.id]] |
Cache level | Summary |
---|---|
Flow run | The data associated with this cache is only available while the process flow is running. For enabled and deployed process flows, the cache is cleared once the run completes; for draft/inactive process flows, a TTL (Time to Live) with a default of 2 hours determines when the cached data is deleted. |
Flow | Data in the cache is retained after the process flow is run, so it can be loaded again within this process flow if required. |
Company | The data associated with the latest update to this cache is available for use in this process flow and in any other process flows created within your company profile. |
This approach is the simplest - all incoming data is cached with a static cache key.
In the example below, all incoming customer records will be added to a cache named ALLcustomers and a static cache key named customers:
When the data is cached, it's likely that the cache will include multiple records - for example:
To retrieve this cache, we simply drop a load from cache shape where required in the process flow and specify the same cache and cache key that were defined in the corresponding add to cache shape:
The load from cache shape works as normal to retrieve cached data where the cache was created with a payload variable - you choose the cache name and key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated from the payload variable that was specified when the cache was created.
If a payload variable has been used to cache data, you would typically have included a flow control shape to create multiple payloads - for example:
So you will have multiple cache keys that can be loaded. To do this, you can add one load from cache shape for every cache key that you want to retrieve, specifying the required key in each case. For example:
Alternatively, you can add a single load from cache shape and target specific cache keys by passing in the required ids.
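As a rough illustration of that second approach, the sketch below shows how a multi-pick template such as order-[[payload.*.id]] might resolve to one cache key per id found in the incoming payload. This is assumed behaviour expressed in Python for illustration, not platform code:

```python
# Illustrative sketch: resolve a multi-pick cache key template by producing
# one key per record in the incoming payload (e.g. "order-<id>" for each id).

def resolve_multi_keys(prefix, field, payload):
    """Return one cache key per record that contains the given field."""
    return [f"{prefix}{record[field]}" for record in payload if field in record]

# Incoming payload containing the ids we want to load from the cache
ids_payload = [{"id": 5693105439058}, {"id": 5697116045650}]
keys = resolve_multi_keys("order-", "id", ids_payload)
# keys == ["order-5693105439058", "order-5697116045650"]
```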
We've already noted how the add to cache shape can be added to a process flow to cache the entire payload at a given point in the flow. The default behaviour is that when a process flow runs and hits an add to cache shape, any existing data associated with that cache is overwritten with a new payload from the new run.
However, it is possible to append data to a cache, so each time the process flow runs and the add to cache shape is reached, the incoming payload is appended to the existing cache. This works for any cache type (flow, flow run, and company).
Paginated data. If your connection shape is pulling paginated data, it's very important to understand how the save all pages option works in conjunction with append. For more information please see our cache pagination options page.
Cache size. Theoretically, if a cache is set to append data and then runs on a regular basis indefinitely, the cache size may grow to an unmanageable size. With this in mind, a limit is in place to ensure that a single cache cannot exceed 50MB.
Append data format. Appending cached data is supported for JSON only.
To use the append option, follow the steps below.
Step 1 Drop an add to cache shape into your process flow in the normal way - create your cache, then select it and add your cache key.
Step 2 Ensure that the save all pages option is set as needed. For more information about how this option affects appended data please see our cache pagination options page.
Step 3 Enable the append option:
Step 4 A path to append to field is displayed:
Here, you need to consider the structure of the payload that you're passing in and specify a path that ensures that each new payload is appended in the right place.
Step 5 Save the shape. Next time the process flow runs the data will be cached and appended.
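To picture what the path to append to does, the following sketch appends new records to a list at a dot-separated path inside a cached JSON-like document. The semantics and helper name are our own assumptions for illustration, not platform code:

```python
# Illustrative sketch: append new items to the list found at a
# dot-separated path (e.g. "data.orders") within an existing cached document.

def append_at_path(cached, path, new_items):
    """Walk `cached` along `path` and extend the list found there."""
    parts = path.split(".")
    node = cached
    for part in parts[:-1]:      # descend to the parent of the target list
        node = node[part]
    node[parts[-1]].extend(new_items)
    return cached

cache = {"orders": [{"id": 1}]}
append_at_path(cache, "orders", [{"id": 2}])
# cache == {"orders": [{"id": 1}, {"id": 2}]}
```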
If you choose to view the payload for an add to cache shape, the payload will always show data from the latest run - for example:
However, when you add a load from cache shape, the payload will show ALL appended data so far - for example:
Understanding how pagination options impact what data is cached.
When you drop an add to cache shape into a process flow, there are two options that you should consider if your selected endpoint paginates the data that is pulled - these are: save all pages and append.
Together, these two options determine how paginated data is cached, so it's important to understand the implications of each.
When paginated data is pulled from a connection shape, a payload is created for each page - you can see these in the run log payload tab:
If you are caching paginated data and toggle the save all pages option to ON, the payload for each page is saved to its own cache key.
The name of this key is generated dynamically, by adding the page number as a suffix to your given cache key for the add to cache shape. Consider the example below:
In this example, our given cache key is called demokey and the save all pages option is toggled ON. So, if received data is paginated into 5 pages, there will be 5 payloads to be cached. These would be named:
demokey.1
demokey.2
demokey.3
demokey.4
demokey.5
Here, .n is the page number suffix. If/how these cache keys can be accessed depends on how the append option is set.
It's important to note that every time a connection shape pulls paginated data, page numbers reset to 1.
How payloads are written to the cache depends on how the append option and the save all pages option are combined:
If the save all pages option is toggled OFF and the append option is toggled OFF, the given (static) cache key is overwritten every time the payload for a page is added to the cache. In this scenario, the cache key will always include data for the latest page from the latest pull.
If the save all pages option is toggled ON and the append option is toggled ON, dynamic cache keys will be created on the first pull, and all subsequent pulls will append paginated payloads to the correlating key (additional cache keys are created for new page numbers, as needed). In this scenario, dynamic cache keys will continue to grow with each data pull as each one will include all payloads received for the correlating page number.
If the save all pages option is toggled OFF and the append option is toggled ON, the payload for each page is appended to the given (static) cache key. In this scenario, your single (static) cache key continues to grow with each data pull - nothing is overwritten.
If the save all pages option is toggled ON and the append option is toggled OFF, dynamic cache keys will be created on the first pull, and all subsequent pulls will overwrite data in the correlating key (additional cache keys are created for new page numbers, as needed). In this scenario, dynamic cache keys will only ever contain the latest data for the correlating page number.
The diagram below illustrates the above:
For information about setting the append option, please see our Appending data to a cache page.
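The four scenarios above can be summarised in a short sketch. This is illustrative Python, not platform code - it simply models how one pull of paginated payloads lands in a cache (represented here as a dict) for each combination of options:

```python
# Illustrative model of the save all pages / append combinations.
# `cache` is a dict of cache key -> list of cached payloads.

def cache_pages(cache, key, pages, save_all_pages, append):
    """Cache one pull of paginated payloads according to the two options."""
    if save_all_pages:
        # One dynamic key per page: "<key>.<page number>"
        for page_num, payload in enumerate(pages, start=1):
            page_key = f"{key}.{page_num}"
            if append:
                cache.setdefault(page_key, []).append(payload)  # grows each pull
            else:
                cache[page_key] = [payload]  # overwritten with the latest pull
    else:
        # A single static key for every page
        if append:
            cache.setdefault(key, []).extend(pages)  # nothing is overwritten
        else:
            # Each page overwrites the key, so only the latest page survives
            cache[key] = [pages[-1]] if pages else []
    return cache

cache = {}
cache_pages(cache, "demokey", ["p1", "p2"], save_all_pages=True, append=False)
# -> {"demokey.1": ["p1"], "demokey.2": ["p2"]}
```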
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
When we come to load this data, we must target the required cache key. If you only want a single item, the quickest way is to specify the resolved cache key.
Consider the following process flow:
Here, our manual payload contains customer data as below:
To allow us to target specific customer records from this payload, we send it through a flow control shape, which is set to create one payload per customer:
...so now we have lots of payloads to be cached:
If we look at the payload for the first of these, we can see it contains a single customer record - notice that there's an id field with a value of 1000000001. This field uniquely identifies each record.
Next we define an add to cache shape - we create a new cache and use a payload variable to generate a dynamic cache key for each incoming payload:
Here, the payload variable is defined as:
where:
customer- is static text to prefix the resolved variable.
[[payload.]] instructs the shape that this variable should be resolved from the incoming payload.
0 denotes that the first occurrence of the following item found in the payload should be used to resolve this variable.
id is the name of the field in the payload to be used to resolve this variable.
So, if we take our first payload above:
...our payload variable would resolve to the following cache key:
This is what we use in our load from cache shape:
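For illustration, the resolution of a [[payload.&lt;index&gt;.&lt;field&gt;]] variable against an incoming payload can be sketched as below. This is assumed behaviour modelled in Python, not platform code:

```python
# Illustrative sketch: resolve a cache key template such as
# "customer-[[payload.0.id]]" against an incoming payload
# (assumed to be a list of records/dicts).
import re

def resolve_cache_key(template, payload):
    """Replace each [[payload.<index>.<field>]] with the matching value."""
    def substitute(match):
        index, field = match.groups()
        return str(payload[int(index)][field])
    return re.sub(r"\[\[payload\.(\d+)\.(\w+)\]\]", substitute, template)

payload = [{"id": 1000000001, "name": "Ana"}]
print(resolve_cache_key("customer-[[payload.0.id]]", payload))
# customer-1000000001
```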
It's not currently possible to access different versions of a cache. So, each time a process flow runs with the same add to cache shape, the payload for that cache is overwritten with the latest data and it's this that will be available to load from a company cache.
Cache retention
When you choose to add a company cache, retention options are available so you can decide how long cached data should be retained (you can set a time limit in seconds, minutes, hours, or days).
The default setting is 2 hours. This can be updated to a maximum of 7 days.
The cache key resolves dynamically using payload variables. Typically used when your aim is to load single or multiple items from the cache later in the flow (or in other flows). For more information please see our Generating dynamic cache keys with payload variables page.
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
The load from cache shape works as normal - you choose the cache and cache key to be loaded:
However, the important point to consider is that the cache key that you specify here will have been generated dynamically by resolving the payload variable that was specified when the cache was created.
This approach assumes that the cache to be loaded was added with a payload variable for the cache key, and is comprised of multiple, single-record payloads (having been through a flow control shape).
Each of these payloads has its own, unique cache key (when data was added to the cache, this key was generated dynamically by resolving a cache key payload variable).
For more information about this stage, please see Generating dynamic cache keys with payload variables.
When we come to load this data, we must target the required cache keys. In the same way that we use a payload variable to add data to a cache with dynamic cache keys, we can use a payload variable to load data from these keys.
To do this, you configure a load from cache shape with a 'multi-pick' payload variable in the cache key, and ensure that data passed into this shape contains the values required to resolve this variable.
In summary, you can drop a single load from cache shape into a process flow and specify a payload variable as the required cache key. This must be in the form:
...where <element> should be replaced with whichever data element you will be passing in to resolve the cache key. For example:
The <element> defined here will be the same data element that was specified in the payload variable for the corresponding add to cache shape.
You then need to pass in any <element> values that should be used to resolve required cache key names. This might be achieved via a connection shape (if values are being generated from another system), or perhaps a manual payload shape. Whichever shape you use must be placed immediately before the load from cache shape.
To help understand how this approach works, we will step through an example.
Suppose we have a scenario where a process flow has been built to receive incoming orders, and another process flow needs to target specific orders received from this flow.
Process flow 1: Add to cache
To allow the second process flow access to orders processed by the first, we must add all incoming orders to a company type cache in the first process flow (remember that company type caches can be accessed by any other process flow created for your company profile). To ensure that we can go on to target specific orders from this cache later, we will cache every order in its own cache key, using a payload variable.
Process flow 2: Load from cache To retrieve specific orders from the cache created in the first process flow, we will pass the required order ids into a load from cache shape. These ids will be used to resolve dynamic cache keys, using a payload variable.
For the first process flow, we batch an 'orders' payload into single order payloads - then we add each payload to its own cache key, which is created dynamically from a payload variable. Let's break these steps down:
For the second process flow, we pass the required order ids into a load from cache shape. These ids are then used to resolve dynamic cache keys (via a payload variable) to determine which orders should be loaded.
Step 1: Manual payload
The manual payload shape contains an 'orders' payload with 17 orders in total.
Step 2: Filter
The filter shape ensures that orders are only processed if the id field is not empty.
Step 3: Flow control
The flow control shape is set to create batches of 1 from the payload root level - so every order will be added to its own payload.
Step 4: Add to cache
The add to cache shape is defined to add to a company type cache, named CPT-722. The cache key is created dynamically: the first part is always order-, followed by the value of the first id element found in the incoming payload - e.g. order-5697116045650. All data from the incoming payload will be added to this cache key. In our example using flow control, the incoming payload will only ever be a single order.
Step 5: Run flow
When this process flow runs, checking payload information for the add to cache shape shows that 17 payloads have been cached - one payload for each order.
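The add to cache steps above can be sketched as follows (a Python illustration of the behaviour, not Patchworks internals): each single-order batch is stored under a key built from its id.

```python
# Sketch: batch an orders payload into single-order payloads (batch size 1),
# then store each batch under a dynamically named cache key "order-<id>".

def cache_orders(orders: list) -> dict:
    cache = {}
    for order in orders:                 # flow control: batches of 1
        key = f"order-{order['id']}"     # dynamic cache key from the payload variable
        cache[key] = [order]             # each payload cached under its own key
    return cache

orders = [{"id": f"56971160456{n:02d}"} for n in range(17)]  # 17 orders, as in the example
cache = cache_orders(orders)
assert len(cache) == 17                  # one cached payload per order
```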
Step 1: Manual payload
The manual payload shape contains two order ids that we want to load from our cache.
Step 2: Load from cache
The load from cache shape is configured to load data from our CPT-722 cache, targeting dynamic cache keys from order-[[payload.*.id]]. Here, the required cache key(s) will be resolved from all (*) ids found in the incoming payload - in this case order-5693105439058 and order-5697116045650.
Step 3: Run flow
When this process flow runs, checking payload information for the load from cache shape shows that two payloads have been loaded - one for each of our given ids.
If caches have been added to your process flow or company-level caches have been added for use in any process flow, you can reference these in field mapping transformations.
Using a cache lookup transformation function, you can look up values from a cache and map them to fields in a target system.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when referencing a cache we don't select a source field - the specified cache data is our source.
Step 1 In your process flow, access settings for the map shape that you want to update:
Step 2 Click the add mapping rule option - for example:
Step 3 Click the add transform button:
Step 4 Click the add transform button:
Step 5 Click in the name field to access a list of all available transform functions, then select cache lookup:
Step 6 Cache reference fields are displayed:
Complete these fields using the table below as a guide:
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way. Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the specified cache values will be mapped to the target field.
The steps detailed above show how to configure the cache lookup transform with a known cache key. However, it's possible to populate the cache key automatically, using the output from a previous transform function.
To do this, you add a mapping row in the usual way and define any required transform functions to produce the required value for cache keys. Once this is done, add a cache lookup transform function (as shown above) but leave the key field blank.
When the key field is blank, output from the previous transform function for the mapping is applied.
Suppose you have a cache where multiple cache keys have been defined in the form:
itemref-last_name
For example:
1000021-Smith
Now suppose you want to define a cache lookup transformation which will determine the key by manipulating mapped fields. You would:
Add a mapping row with two source fields - one for itemref and another for last_name.
Select itemref as the target field.
Add a concatenate transform function to join the itemref and last_name fields with a hyphen.
Add a cache lookup transform function as defined above, but leave the key field blank.
When the process flow runs, output from the concatenate transform function will be applied as the key for the cache lookup transform function.
The example above describes how you might use a concatenate transform function to generate a cache key; however, the output from any transform function can be used.
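The chain described above behaves roughly like this (a Python sketch; the cached data and field values are hypothetical, taken from the example form itemref-last_name):

```python
# Sketch: concatenate itemref and last_name with a hyphen, then use that
# output as the cache key for a cache lookup whose "key" field was left blank.

def concatenate(values: list, glue: str = "-") -> str:
    return glue.join(values)

def cache_lookup(cache: dict, key: str, default=None):
    return cache.get(key, default)

cache = {"1000021-Smith": {"discount": "10%"}}   # hypothetical cached data

key = concatenate(["1000021", "Smith"])          # output of the previous transform
result = cache_lookup(cache, key)                # blank key -> use previous output
print(key, result)  # 1000021-Smith {'discount': '10%'}
```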
The connection shape is used to define which connector instance should be used for sending or receiving data, and then which endpoint.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Any connector instances that have been added for your installed connectors are available to associate with a connection shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our Connectors & instances page.
When you add a connection shape to a process flow, the connection settings panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connection shape, click the associated 'cog' icon to access the settings panel - for example:
Follow the steps below to configure a connection shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the manage connectors page.
Step 2 Select the endpoint that you want to use - for example:
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a timeout error when processing a particularly large payload.
Step 7 Set error handling options as required. Available options are summarised below:
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Step 9 If your selected endpoint is configured to POST/PUT/PATCH/DELETE data, you can set response handling options:
These options are summarised below:
Step 10 Save your changes.
You can view and manage all existing caches from the data caches page - to access this page, select caches from the dashboard navigation menu.
During routine platform maintenance, cached data may be cleared. While we make a best effort to retain data for up to 7 days, it could be cleared sooner. Please design your process flows accordingly.
The data caches page is split into three sections: flow run caches, flow caches, and company caches:
Each cache is listed with the following details:
If you have a lot of caches, you can search by name:
To access cache details for a particular cache, click on its name:
When you select a cache from the list, an edit cache page is displayed:
From here you can:
To change the name of the cache, simply update the name field in the upper cache details panel, then click the save button.
When the name is updated and saved, the change is immediately reflected in any add to cache shapes in process flows, where this cache is used.
The cache name must not include full stop (.) or colon (:) characters.
You can use the maximum age slider to change the cache retention period for a cache:
Note that:
The maximum age for a flow run cache is 2 hours - this cannot be changed
The maximum age for a flow or company cache can be changed to anything up to 7 days
The usage panel shows general usage information about the cache:
Here you can see:
The cache contents panel displays an entry for each cache key update. Information shown varies, depending on the cache type.
The following details are displayed for each cache item in a flow run-level cache:
The following details are displayed for each cache item:
To clear all current content in the cache, click the clear cache button:
This removes any existing data but leaves the cache in place so it can still be used in process flows.
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is written for SFTP connections; however, it also applies to FTP.
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via the ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this - for example:
Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, keeping the current filenames:
rename:store1/completed_orders/{{current_filename}}
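The substitution that the rename command relies on can be pictured with a short sketch (Python illustration; the filename is hypothetical - at run time the variable resolves to whichever file is being processed):

```python
# Sketch: the {{current_filename}} variable is replaced with the name of the
# file being processed, so a rename command effectively moves the file to a
# new folder while keeping its name.

def resolve(template: str, variables: dict) -> str:
    out = template
    for name, value in variables.items():
        out = out.replace("{{" + name + "}}", value)
    return out

command = "rename:store1/completed_orders/{{current_filename}}"
resolved = resolve(command, {"current_filename": "orders_001.csv"})
print(resolved)  # rename:store1/completed_orders/orders_001.csv
```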
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
The path in your SFTP connection shape should be set to:
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
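The one-file-per-run behaviour can be sketched like this (Python illustration; the filenames and the pattern are hypothetical - the real path regex is whatever you configure in the SFTP connection):

```python
import re

# Sketch: when the SFTP path is a regular expression, each matched file
# triggers its own pass through the rest of the process flow.

files_on_server = ["orders_001.csv", "orders_002.csv", "notes.txt"]
path_pattern = re.compile(r"orders_\d+\.csv")   # hypothetical path regex

matched = [f for f in files_on_server if path_pattern.fullmatch(f)]
for f in matched:
    # ...subsequent shapes run once per file, with this file as the payload...
    print(f"flow run with payload from {f}")

assert len(matched) == 2   # two files matched -> the flow runs twice
```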
For information about working with regular expressions, please see the link below:
When you install the Patchworks SFTP connector and then add an instance, you'll find that two authentication methods are available:
Further information on these authentication methods can be found on our page.
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The data object passed into the script contains three items: payload, meta, and variables.
Our script creates a timestamp, puts it into the meta, and then puts the meta back into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
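The idea can be sketched as follows. This is a Python illustration of the logic described above, not the actual (PHP) script - the filename orders.csv and the exact key layout beyond original_filename are assumptions:

```python
from datetime import date

# Sketch of the custom-script logic: the script receives a data object with
# "payload", "meta" and "variables"; it builds a dated path, stores it under
# meta["original_filename"], and returns the data. The SFTP shape then uses
# that value when writing the file.

def add_dated_filename(data: dict) -> dict:
    stamp = date.today().isoformat()                           # e.g. 2024-05-01
    data["meta"]["original_filename"] = f"{stamp}/orders.csv"  # hypothetical filename
    return data

data = {"payload": [], "meta": {}, "variables": {}}
data = add_dated_filename(data)
print(data["meta"]["original_filename"])
```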
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a store1 folder:
The path is added as a regular expression, explained below:
Cache
Use the dropdown list to select the cache that you want to reference. Available caches will be:
All flow-level caches added for this process flow
All company-level caches added from any process flow
All flow run-level caches created for this run
Key
Enter the key that was specified in the add to cache shape for the cache that you want to access here. Alternatively, if this transformation is preceded by another transformation function, you can leave this field blank and pick up a value from the output of the previous function. For further information please see the Using output from a transform as the lookup cache key section.
Lookup
You can use dot notation to look up specific elements from the cached payload. If you leave this field blank, the full cached payload is retrieved.
Default
If required, specify a default value to be used if the cache lookup transform doesn't find a value to return.
Retries
Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Backoff
If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff value. This sets a delay (in seconds) before a retry is attempted. You can define a value between 1 and 10 seconds. The default setting is 1.
Allow unsuccessful statuses
If you want the process flow to continue even if the connection response is unsuccessful, toggle this option on. The default setting is off.
Raw
Push the payload exactly as it is pulled - no modifications are made.
First
This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. So, [{customer_1}] is pushed as {customer_1}.
Generally, if your process flow is pulling multiple records from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed.
Wrapped
This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects.
The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, {customer_1},{customer_2},{customer_3} is pushed as [{customer_1},{customer_2},{customer_3}].
Save response as payload
Set this option to ON to save the response from the completed operation as a payload, which is then sent into the next step of the process flow. (Applicable endpoint methods: POST, PUT, PATCH, DELETE.)
Expect an empty response
Set this option to ON if you are happy for the process flow to continue if no response is received from this request.
This can be useful (for example) if your first connection step needs to POST a message to a system and then go on to another connection step to retrieve data - in this case we don't expect/need a payload from the first connection step for the flow to continue. (Applicable endpoint method: POST.)
Size
The current size of the cache, shown with a percentage use indicator. The maximum cache size is 50MB.
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed. This timestamp is updated even when no data was added to the cache.
Keys
The number of keys associated with this cache.
User pass
The instance is authenticated by providing a username and password for the SFTP server.
Key pass
The instance is authenticated by providing a private key (RSA).
Name
The name that was specified when the cache was created. Caches are created via the add to cache shape.
Flow
For flow and flow run caches, this is the name of the process flow which is using the cache. This information is not shown for company caches, as the cache might be used in any process flow within a company profile.
Created
The date and time that the cache was created.
Last accessed
The date and time that the cache was last accessed by a process flow. The cache may or may not have been updated with data at this time (even if there is no data to be added, the access date/time is logged).
Keys
The number of cache keys associated with this cache. Cache keys are created via the add to cache shape.
Size
The current size of the cache in proportion to the limit.
Delete the cache (and all associated data).
Flow Run ID
The unique id of the process flow run that updated the cache key.
Started at
The date and time that the process flow run was started.
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via a flow control shape) and the add to cache save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
Key
The cache key name.
Page
If multiple pages are added to a cache (for example, if incoming data is paginated or batched via a flow control shape) and the add to cache save all pages option is toggled ON, each page is listed individually.
Unique key
The internal cache key.
Size
The size of the cache key.
FTP command
A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command.
Root
This field is only needed if you are specifying a regular expression in the subsequent path field.
Path
If the name of the file that you want to target is static and known, enter the full path to it here. If the name is variable and therefore unknown, you can specify a regular expression as the path.
Original filename
Original path
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields please see our Configuring SFTP connections page - details are the same.
Data pools are added and managed via the data pools option in general settings. From here you add a new data pool, or view/update an existing data pool. For more background information on data pools please see our De-dupe shape page.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
De-dupe data pools can be created in two ways:
You can access existing data pools from general settings.
Step 1 Select the settings option from the bottom of the dashboard navigation bar:
Step 2 Select data pools:
...all existing data pools are displayed:
For each data pool you can see the creation date, and the date that it was last updated by a process flow run.
Step 3 To view details for a specific data pool, click the associated name in the list:
...details for the data pool are displayed:
In the top panel you can change the data pool name/description (click the update button to confirm changes) or - if the data pool is not currently in use by a process flow - you can choose to delete it.
In the lower panel you can see all data in the pool. This data is listed with the most recent entries first - the following details are shown:
The de-dupe shape can be used to handle duplicate records found in incoming payloads. It can be used in three behaviour modes:
Filter. Filters out duplicated data so only new data continues through the flow.
Track. Tracks new data but does not check for duplicated data.
Filter & track. Filters out duplicated data and also tracks new data.
A process flow might include a single de-dupe shape set to one of these modes (e.g. filter & track), or multiple de-dupe shapes at different points in a flow, with different behaviours.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The de-dupe shape works with incoming payloads from a connection shape, and also from a manual payload, inbound API or webhook.
JSON and XML payloads are supported.
The de-dupe shape is configured with a behaviour, a data pool, and a key:
As noted previously, the de-dupe shape can be used in three modes, which are summarised below.
Data pools are created in general settings and are used to organise de-dupe data. Once a data pool has been created it becomes available for selection when configuring a de-dupe shape for a process flow.
When data passes through a de-dupe shape which is set for tracked behaviour, the value associated with the key field for each new record is logged in the data pool. So, the data pool will contain all unique key field values that have passed through the shape.
You can have multiple de-dupe shapes (either in the same process flow or in different process flows) sharing the same data pool. Typically, you would create one data pool for each entity type that you are processing. For example, if you are syncing orders via an 'orders' endpoint and products via a 'products' endpoint, you'd create two data pools - one for orders and another for products.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
The key field is the data field that should be used to match records. This would typically be some sort of id that uniquely identifies payload records - for example, an order id if you're processing orders, a customer id if you're processing customer data, etc.
When duplicate data is identified, it is removed from the payload; however, exactly what gets removed depends on the configured key field.
If your given key field is a top-level field for a simple payload, the entire record will be removed. However, if the payload structure is more complex and the key field is within an array, then duplicates will be removed from that array but the parent record will remain.
Let's look at a couple of examples.
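As a simple illustration of the top-level case, the filter & track behaviour can be sketched like this (Python; the field names and id values are hypothetical, and the real data pool is managed by Patchworks):

```python
# Sketch: de-dupe filter behaviour with a top-level key field. Records whose
# key value has been seen before (i.e. is already in the data pool) are
# removed from the payload; new key values are tracked in the pool.

def dedupe_filter(payload: list, key_field: str, pool: set) -> list:
    kept = []
    for record in payload:
        value = record.get(key_field)
        if value in pool:
            continue            # duplicate: drop the whole record
        pool.add(value)         # "track": log the new key value in the pool
        kept.append(record)
    return kept

pool = {"1001"}                 # id 1001 already tracked in the data pool
orders = [{"id": "1001"}, {"id": "1002"}, {"id": "1002"}]
print(dedupe_filter(orders, "id", pool))  # [{'id': '1002'}]
```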
The de-dupe shape supports JSON and XML payloads.
If required, you can import existing data into a de-dupe pool. For example, you may have records that you know have been processed elsewhere and want to ensure that they aren't processed via Patchworks.
Conversely, you can export de-dupe pool data to a CSV file, for use outside of Patchworks.
De-dupe data exports are completed in CSV format, delimited ONLY with a single comma between fields.
The exported file includes two columns with value and entity_type_id headers. For example:
When de-dupe data values are imported:
All records in the import file are added to the data pool as new items
Any existing items in the data pool are unchecked and unchanged
To import de-dupe values, the import file must be in the same format as export files above, with the same headers. I.e.:
Where:
The value is the key field value that you are matching on
The entity_type_id is the internal Patchworks id for the entity type associated with the key field that you are using to match duplicates. This id must be present for every entry in your CSV file. You can download a list of ids by following the steps detailed later in this page.
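For example, a valid import file could be produced like this (Python sketch; the values and the entity_type_id of 1 are placeholders - use the real id from the downloaded entity types list):

```python
import csv
import io

# Sketch: build a de-dupe import CSV with the required "value" and
# "entity_type_id" headers, delimited with a single comma between fields.

rows = [("ORD-1001", "1"), ("ORD-1002", "1")]   # placeholder values and ids

buffer = io.StringIO()
writer = csv.writer(buffer, lineterminator="\n")
writer.writerow(["value", "entity_type_id"])
writer.writerows(rows)

print(buffer.getvalue())
# value,entity_type_id
# ORD-1001,1
# ORD-1002,1
```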
Import files cannot exceed 5MB.
To export/download a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the name of the data pool that you want to export:
Alternatively, you can create a new data pool.
Step 3 With the data pool in edit mode, move to the lower tracked de-dupe data panel and click the download button:
Step 4 The download job is added to a queue and a confirmation message is displayed:
Step 5 When your download is ready, you'll receive an email which includes a link to retrieve the file from the file downloads page. If you can't/don't want to use this link, you can access this page manually - click data pools in the breadcrumb trail at the top of the page:
...followed by the settings element option:
Step 6 Select the file downloads option from the settings page:
Step 7 On the file downloads page, you'll find any exports that have been completed for your company profile in the last hour.
This list may include exports from different parts of the dashboard, not just data pools (for example, run log and cross-reference lookup data exports are added here).
Step 8 Click the download button for your job - the associated CSV file is saved to the default downloads folder for your browser.
Download files are cleared after one hour. If you don't manage to download your file within this time, don't worry - just run the export again to create a new one.
If you want to import data into a de-dupe data pool, you need to ensure that each record in your CSV file includes an entity_type_id. To find which id you should use, follow the steps below to download a current list.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 Click the download entity types button at the top of the page:
Step 3 A CSV file is saved to the default downloads folder for your browser.
To import data into a de-dupe data pool, follow the steps below.
Step 1 Log into the Patchworks dashboard, then select the settings option:
...followed by the file data pools option:
Step 2 If you want to import data into an existing data pool, click the name of the required data pool from the list:
Alternatively, you can create a new data pool.
Step 3 Move to the lower tracked de-dupe data panel and click the import button:
Step 4 Navigate to the CSV file that you want to import and select it:
Step 5 The file is uploaded and displayed as a button - click this button to complete the import:
Step 6 The import is completed - existing values are updated and new values are added:
You may need to refresh the page to view the updated data pool.
The flow control shape can be used for cases where you're pulling lots of records from a source connection, but your target connection needs to receive data in small batches. Two common use cases for this shape are:
A target system can only accept items one at a time
A target system has a maximum number of records that can be added/updated at one time
The flow control shape takes all received items, splits them into batches of your given number, and sends these batches into the target connection.
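The batching logic can be sketched as follows (a Python illustration of the behaviour, not Patchworks internals):

```python
# Sketch: split a list of records into batches of a given size, as the
# flow control shape does before sending each batch on to the next step.

def make_batches(records: list, batch_size: int) -> list:
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

items = list(range(10))
print(make_batches(items, 4))   # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note that the final batch may be smaller than the given batch size if the record count doesn't divide evenly.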
Step 1 In your process flow, add the flow control shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates - for example:
Step 3 Move down to the batch level field and select the data element that you are putting into batches. For example:
The data structure in this dropdown field is pulled from the schema associated with the source.
Step 4 In the batch size field, enter the number of items to be included in each batch. For example:
Step 5 Save the shape. Now when you run this process flow, data will be split into batches of your given size.
If you check the payload for the flow control step after it has run, you'll see the payload for the last batch processed.
Tracked de-dupe data is retained for 90 days after it's added to a data pool.
Currently, the de-dupe shape supports JSON payloads only.
To add and configure a new de-dupe shape, follow the steps below.
Step 1 In your process flow, add the de-dupe shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be de-duped originates - for example:
Step 3 Move down to the behaviour field and select the required option.
Step 5 In the key field, select/enter the data field to be used for matching duplicate records. How you do this depends on how the incoming data is being received - please see the options below:
Step 6 Select the payload format:
Step 7 Save the shape.
When specifying a filter value, the maximum number of characters is 1024.
To view/update the settings for an existing filter shape, click the associated 'cog' icon:
Follow the steps below to configure a filter shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload to be filtered originates.
Step 2 Click the add new filter button:
Step 3 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add a manual path.
Step 4 Use the remaining operator, type and value options to define the required filter.
Step 5 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records are removed.
Step 6 Click the create button to confirm your settings.
Step 7 The filter is added to the filter shape - you can now add more filters if needed:
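The effect of the keep matching? toggle described above can be sketched as follows. This is an illustrative Python snippet with hypothetical field names, not the actual filter implementation:

```python
# Sketch of a filter rule: records are matched against a field/predicate,
# then either kept (keep matching ON) or removed (keep matching OFF).

def apply_filter(records, field, predicate, keep_matching):
    matched = [r for r in records if predicate(r.get(field))]
    if keep_matching:
        return matched                                   # keep matched records
    return [r for r in records if r not in matched]      # drop matched records

records = [{"status": "paid"}, {"status": "refunded"}, {"status": "paid"}]
is_paid = lambda value: value == "paid"

kept = apply_filter(records, "status", is_paid, keep_matching=True)
dropped = apply_filter(records, "status", is_paid, keep_matching=False)
# kept contains the two paid records; dropped contains only the refund
```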
When defining a filter, you can choose from the following types:
Click the 'eye' icon to view the content associated with this key. For example:
Click the 'eye' icon to view the content associated with this key. For example:
When specifying a path to a given folder in this way, you don't need a / at the start or at the end.
This field is not currently used. For information about working with original filenames please see the section below.
This field is not currently used. For information about working with original paths please see the section below.
The de-dupe shape is used to identify and then remove duplicate entries from an incoming payload. For more background information please see our page.
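The core idea can be sketched as follows. This is a simplified Python illustration of a "filter & track" style behaviour, assuming a key field and a tracked data pool of previously seen values (names are our own, not Patchworks internals):

```python
# Sketch of de-dupe behaviour: records whose key value already exists in
# the data pool are removed from the payload; new key values are tracked.

def dedupe(records, key, pool):
    fresh = []
    for record in records:
        value = record.get(key)
        if value in pool:
            continue                # duplicate - drop from the payload
        pool.add(value)             # track the new key value in the pool
        fresh.append(record)
    return fresh

pool = {"1001"}                     # key values tracked on previous runs
payload = [{"id": "1001"}, {"id": "1002"}, {"id": "1002"}]
result = dedupe(payload, "id", pool)
# only the first {"id": "1002"} record continues through the flow
```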
If your incoming data is via , or then you can remove any default source instance and endpoint selections:
For more information about these options please see our section.
Step 4 Move down to the data pool field and select the required .
If necessary, you can create a data pool 'on the fly' using the create data pool option. For more information please see .
The selection that you make here determines how the payload is adjusted when duplicate data is removed. For more information please see .
When a connector is built, default filters can be applied at the API level, so when a process flow pulls data, the payload received has already been refined.
However, there may be times where you want to apply additional filters to a payload that's been pulled via a - for example, if the API for a connector does not support particular filters that you need.
The filter shape works with a source payload. As such, it should be placed AFTER a in process flows.
This opens the - for example:
The manual data path field supports .
Presentation of the value field is dependent upon your selected . For example, if the type field is set to specific date, you can pick a date for the value:
When defining a value, you can include , , and variables.
Don't forget that when a process flow is running you can - this is a great way to check that your filter is refining data as expected.
Type | Expected value |
---|
If the incoming payload for the de-dupe shape is received from a , or , there is no associated instance/endpoint and therefore no known data schema. In this case, you should enter the required key field value manually - enter the dot notation path to the required field in your data - for example: *.customerID
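To illustrate how a wildcard dot notation key such as *.customerID could resolve against a JSON array payload, here is a simplified Python sketch (the actual resolution logic is internal to Patchworks; this only shows the idea):

```python
# Sketch: the * segment matches each element of a top-level array, and the
# remaining path segment is read from each element.

def resolve_key(payload, path):
    head, _, rest = path.partition(".")
    if head == "*":
        return [item.get(rest) for item in payload]
    return [payload.get(head)]

payload = [{"customerID": "C-1"}, {"customerID": "C-2"}]
print(resolve_key(payload, "*.customerID"))   # prints ['C-1', 'C-2']
```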
If the incoming payload for the de-dupe shape is received from a , or , you can generate the key field value dynamically based on given payload, flow, or metadata variables.
Any combination of , and variables can be used to form cache key names. For more information please see our section.
Mode | Summary
---|---
Filter | Remove duplicate data from the incoming payload so only new data continues through the flow. New data is NOT tracked.
Track | Log each new key value received in the data pool.
Filter & track | Remove duplicate data from the incoming payload AND log each new key value received.
Column | Summary
---|---
Value | The value of the field that was identified as a match for duplicate records.
Created by | The name of the process flow where this entry was tracked into the data pool. Click this name to open the associated process flow.
Updated at | The date and time that the record was added to the pool (UTC time).
String | A text string - for example: |
String length |
Number | A number - for example: |
Specific date | A day, month and year, selected from a date picker. |
Dynamic date |
Boolean |
Null comparison | A field is |
Variable | Designed specifically for cases where you are comparing a variable value as the filter comparison. When selected, a |
You can use the manual payload shape to define a static payload to be used for onward processing. For example, you might define an email template that gets pushed into an email connection, or you might want to test a process flow for a connector that's currently being built by your development team.
The maximum number of characters for a single payload is 100k. Anything larger than this may cause the process flow to fail.
To view/update the settings for an existing manual payload shape, click the associated 'cog' icon:
This opens the options panel - for example:
To configure a manual payload shape, all you need to do is paste the required payload and save the shape - for example:
A manual payload shape can only be saved if a payload is present. If you add a manual payload shape but don't have the required payload immediately to hand, you can just enter {} and save.
Mappings are at the heart of Patchworks.
When we pull data from one system and push it into another, it’s unlikely that the two will have a like-for-like data structure. By creating mappings between source and target data fields, the Patchworks engine knows how to transform incoming data as needed to update the target system.
The illustration below helps to visualise this:
In process flows, the map shape is used to define how data fields pulled from one connector correlate with data fields in another connector, and whether any data transformations are required to achieve this.
If your organisation has in-house development expertise and complex transformation requirements, you can use our custom scripts feature to code your own scripts for use with field mappings.
The map shape includes everything that you need to map data fields between two connections in a process flow. When you start to create mappings, there are two approaches to consider:
Having added a map shape to a process flow, click the associated 'cog' icon to access settings:
For more information about working with these settings, please see our Working with field mappings page.
The generate automatic mapping option is used to auto-generate mappings between your selected source and target connections.
All Patchworks prebuilt connectors (i.e. connectors installed from the Patchworks marketplace) adopt a standardised taxonomy for tagging common fields found in data schemas for a range of entity types (customers, orders, refunds, products, fulfillments, etc.). So, if your process flow includes connections to sync data between two prebuilt connectors, it's highly likely that auto-generating mappings will complete a lot of the work for you.
Once auto-generation is complete, mapping rows are added for all fields found in the source data - for example:
Where matching tags are found, the mapping rows will include both source and target fields (you can adjust these manually and/or apply transformations, as needed).
Any fields found in the source data which could not be matched by tag are displayed in partial mapping rows, ready for you to add a target manually.
For more information about using the generate automatic mapping feature, please see our Working with field mappings page.
If your process flow includes custom connections (i.e. connectors that have been built by your organisation, using the Patchworks connector builder), you can still use the generate automatic mapping option. The success of this will depend on whether field tagging was applied to your connector during the build:
If yes, your custom connector will behave like any of our prebuilt connectors when it comes to auto-generated mappings, adding fully mapped rows for all matched tags.
If no, Patchworks won't be able to match any source fields to a target automatically - partial mapping rows are added for all source fields found, ready for you to add a target manually.
It's very easy to add individual mapping rows manually, using the add mapping rule option:
We recommend that you always try the automatic mapping option first and then manually add any extra rows if needed. However, there's no reason that you couldn't add all of your mappings manually if preferred.
For more information about adding mappings manually, please see our Working with field mappings page.
The value of the field that was identified as a match for duplicate records. This is the field defined as the key to be used for de-dupe shapes - for example, if the de-dupe key is set to id, the value field shown in the data pool will display id values.
A number which represents the expected string length for the received payload. Here, the 'payload' might refer to a targeted field within the incoming payload, or the entire payload.
For example, if you want to ensure that an objectId field is never empty, you would define a filter for objectId greater than 0:
In this case, toggling the keep matching option to OFF means that the ongoing payload will include only items where this field is not empty. Conversely, toggle this option ON if you want to pass on empty payloads for any reason.
You can use the same principle to check for empty payloads (as opposed to a specific field). In this case you would define a filter for * greater than 0:
Specify a date/time which is relative to a +/- number of units (seconds, minutes, hours, days, months, years). For example:
A true or false value. For example, if you only want to consider items where an itemRef field is set to true, you would define a filter for itemRef equals true:
This page provides guidance on using the map shape to configure field mappings between two connections.
Step 1 Click the source endpoint option:
...source and target selection fields are displayed:
Step 2 Use source and target selection fields to choose the required connector instance and associated endpoints to be mapped - for example:
Step 3 Click the generate automatic mapping button:
...when prompted to confirm this operation, click generate mapping:
As we're configuring a new map shape, there's no danger that we would overwrite existing mappings. However, always use this option with caution if you're working with an existing map shape - any existing mapping rules are overwritten when you choose to generate automatic mappings.
If you need to access the generate automatic option for an existing mapping shape, you need to click into the source and target details first.
Step 4 Patchworks attempts to apply mappings between your given source and target automatically, based on standardised field tags. A mapping rule is added for each source data field and, where possible, a matched target field - for example:
From here you can refine mappings as needed. You can:
Step 5 Toggle the wrap payload on, if required.
This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects. Typically, payload wrapping is defined in the 'pull' connector step, but there may be occasions where this option is needed later in the flow, from the map step.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, {customer_1},{customer_2},{customer_3} is pushed as [{customer_1},{customer_2},{customer_3}].
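As a quick illustration of what wrapping does to the payload, the sketch below surrounds a series of unwrapped JSON objects with array brackets (a simplified stand-in for the shape's behaviour, not the actual implementation):

```python
# Sketch of the wrap payload setting: comma-separated 'unwrapped' objects
# become a single JSON array before being pushed onward.

import json

def wrap_payload(raw):
    """Wrap comma-separated JSON objects into one JSON array string."""
    objects = json.loads(f"[{raw}]")     # surround the objects with [ ... ]
    return json.dumps(objects)

raw = '{"customer": 1},{"customer": 2},{"customer": 3}'
print(wrap_payload(raw))
```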
Step 6 Save changes.
You can add as many new mapping rules as required to map data between source and target connections.
There may be times where you don't want to (or can't) use the payload fields dropdown to select a field from your source/target data schema. In this case, simply select the manual input field and enter the full schema path for the required field.
You can change the display name and/or the field associated with the source or target for any mapping rule.
If you've used the generate automatic mapping option to generate an initial set of mappings, you may find that some source fields could not be auto-mapped. In these cases, a partial mapping rule is added for each un-mapped source field, so you can either add the required destination or delete the rule.
If required, you can map a source field to multiple target fields - for example, you might need to send a customer order number into two (or more) target fields.
Sometimes it can be useful to map multiple source fields to a single target field. For example, you might have a target connection which expects a single field for 'full name', but a source connection with one field for 'first name' and another field for 'surname'.
In this case, you would define mappings for the required source and target fields, then add a transform function to concatenate the two source fields.
When you choose to delete a mapping rule, it's removed from the list immediately. However, the deletion is not permanent until you choose to save the mapping shape.
Any instances defined for your company profile are available to select as the source or target. If you aren't using a connector to retrieve data (for example, you are sending in data via the Inbound API or a webhook), you won't select a source endpoint - instead, use the override source format dropdown field to select the format of your incoming data:
Available transform functions for use in process flows.
Available transform functions are summarised in the following categories:
The cast to float transform function is used to change the data type associated with a source field from string to float.
A float is a type of number which uses a floating point to represent a decimal or fractional value. Floats are typically used for very large or very small values where there are many digits after the point - for example: 5.3333 or 0.0001.
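The cast transforms map directly onto familiar type conversions. The sketch below shows the idea in Python (illustrative only - Patchworks applies these casts internally when the mapping runs):

```python
# Sketch of the cast transforms: each changes the data type of a source
# value so it matches what the destination system expects.

price = "5.3333"                 # source field arrives as a string
as_float = float(price)          # cast to float  -> 5.3333
as_number = int("42")            # cast to number -> 42
as_string = str(42)              # cast to string -> "42"
```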
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to float from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
Transform | Description |
---|---|
Transform | Description |
---|---|
Transform | Description |
---|---|
Transform | Description |
---|---|
Apply the date that a process flow runs, with or without adjustments.
Apply a static date.
Convert a date to a predefined or custom format.
Round a date up/down to the start/end of the day.
Time now
Returns the current date and time in your required format.
Timezone
Convert dates to a selected timezone.
Change the source field data type from string to float.
Change the source field data type from string to number.
Join selected fields with a selected character.
Country code
Apply country codes of a selected type (Alpha 1, Alpha 2, Numeric).
Country name
Return the country name for a country code.
Apply static text or reference variables.
First word
Get the first word from a string.
Hash
Convert a string to a SHA1 Hash.
JSON encode
Encode data into JSON format.
Last word
Get the last word from a string.
Limit
Truncate a string to a given length.
Lowercase
Convert to lowercase.
Pad an existing string of characters to a given length, using a given character.
Prefix
Add a string to the beginning of a field.
Replace any given character(s) with another character.
Substring
Return a given number of characters from a given start.
Suffix
Add a string to the end of a field.
Trim whitespace
Remove any characters around a string.
Uppercase
Convert to uppercase.
URL decode
Convert an encoded URL into a readable format.
URL encode
Convert a string to a URL encoded string.
Change the source field data type from number to string.
Ceiling
Round up to the nearest whole number.
Apply a static number.
Floor
Round down to the nearest whole number.
Make negative
Convert number to a negative.
Make positive
Convert number to a positive.
Perform a mathematical operation for selected fields.
Change the number of decimal places.
Change the source field data type from boolean to string.
Reference a value from cached data.
Convert weight
Convert a specified weight unit to a given alternative.
Apply a true or false value.
Fallback
Set a default value to be used if the given input is empty. Blank values are supported.
Map
Convert values using a cross-reference lookup.
Convert a source value to null.
Apply a field-level custom script. Note that a script will time out if it runs for more than 120 seconds.
The cast to number transform function is used to change the data type associated with a source field from string to number. For example, you might have an id field in a source system that's stored as a string value, but your destination system expects the id to be a number.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to number from the string category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom static date transform function is used to set a target field to a given date and time.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom static date:
Step 5 Click anywhere in the date field, or click the calendar icon, to open a date picker:
Step 6 Set the required date and time.
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom static date will be mapped to the given target field.
The cast to string transform function is used to change the data type associated with a source field from number to string. For example, you might have an id field in a source system that's stored as a number value, but your destination system expects the id to be a string.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast to string from the number category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The custom dynamic date transform function is used to set a target field to the current date and time, based on the date and time that the process flow runs. You can also define rounding and adjustments.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom dynamic date:
Step 5 Optionally, you can add adjustment settings - for example:
These options are summarised below:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom dynamic date will be mapped to the given target field.
The concatenate transform function is used to join the values for two or more source fields (using a given joining character) and then map the output of this transformation to a destination field.
For example, you might have a source system that captures the first name and last name for customer records, and a destination system that expects this information in a single name field.
In the instructions below, we'll step through the scenario mentioned above, where our incoming payload includes the first name and last name for customer records, but our destination system expects this information in a single full_name field. The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be joined and then pushed to the specified destination.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a concatenate transformation. In the example below, we have a row that's currently set to map the source first name field into the destination full name field:
Step 3 On the source side of the mapping row, we need to add all the fields that need to be joined. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to join - for example:
In our example, our source data is coming in via a manual payload, so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to join.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a concatenate transform function to join the values for these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select concatenate from the string section in the list of transform functions:
...concatenate options are displayed:
Step 4 In the join character field, enter the character that you want to use to join your source fields - for example, a hyphen or a space:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be joined:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be joined - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to add any more source fields to be joined. Each time you accept a new source field, you'll see the sequence in which they will be processed when this transform function runs - for example:
Fields are joined in the sequence that they are added here.
Step 11 Having added all required source fields to be joined, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the given source fields for this mapping row will be joined and then that value is pushed to the target. The example below shows an incoming payload before and after the concatenate transformation is applied:
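The end-to-end effect of the steps above can be sketched as follows. This is an illustrative Python snippet (the field names match the scenario in this section; the function is our own):

```python
# Sketch of the concatenate transform: two source fields are joined with a
# chosen character and the result is pushed to a single target field.

def concatenate(record, source_fields, join_char, target_field):
    record[target_field] = join_char.join(record[f] for f in source_fields)
    return record

customer = {"first name": "Ada", "last name": "Lovelace"}
result = concatenate(customer, ["first name", "last name"], " ", "full_name")
print(result["full_name"])   # prints Ada Lovelace
```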
The custom number transform function is used to map a given number to a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom number transformation is used we don't select a source field - the custom number transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom number:
Step 5 Move down to the custom number field and enter your required number - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom number will be mapped to the given target field.
The custom boolean transform function is used to map a value of true or false to a target field.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom boolean:
Step 5 Move down to the value field and select your required true/false value - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the selected custom boolean value will be mapped to the given target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom boolean transformation is used we don't select a source field - the custom boolean transformation is our data source.
Round
Select start of day to change the time to 00:00:00 for the date the process flow is run, or select end of day to change the time to 23:59:59 for that date.
For example, suppose the process flow runs at 2023-08-10 10:30:00. If set to start of day, the transformed value would be 2023-08-10 00:00:00; if set to end of day, the transformed value would be 2023-08-10 23:59:59.
Units
If you want to adjust the date/time, select the required unit - choose from second, minute, hour, day, week, month or year.
For example, suppose the process flow runs at 2023-08-10 10:30:00 and you want to adjust it by 1 day. In this case, you would select day as the unit and specify 1 as the adjustment value. However, if you wanted to adjust by 1.5 days, you would set the unit to hour and specify 36 as the adjustment value.
Adjustment
Having selected an adjustment unit, enter the required number of that unit here. See the units examples above.
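The round and adjustment options above can be illustrated with standard date arithmetic. This Python sketch uses the run time from the examples (datetime is used for illustration only; it is not how Patchworks implements the transform):

```python
# Sketch of dynamic date rounding and adjustment, using the example
# run time 2023-08-10 10:30:00.

from datetime import datetime, timedelta

run_time = datetime(2023, 8, 10, 10, 30, 0)

start_of_day = run_time.replace(hour=0, minute=0, second=0)     # round down
end_of_day = run_time.replace(hour=23, minute=59, second=59)    # round up

adjusted_by_one_day = run_time + timedelta(days=1)              # unit: day, adjustment: 1
adjusted_by_36_hours = run_time + timedelta(hours=36)           # i.e. 1.5 days
```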
The format transform function is used to change a date value to a different format. For example:
...might be changed to:
A range of predefined date formats is available for selection, or you can set your own custom format.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select format from the date category:
Step 5 Click in the format field to select a predefined date format that incoming dates should be converted to:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
Internally, the format transform function uses Laravel's date format methods, which in turn call PHP date format methods. Commonly used format specifiers are listed below - full details are available in this Laravel guide.
Unix Epoch dates must be received as a number, not a string - i.e. 1701734400 rather than "1701734400". If your Unix dates are provided as strings, you should convert them to numbers. To achieve this, add a cast to number transform for the date field BEFORE the date format transform function.
The following characters are commonly used to specify days in custom format dates.

Specifier | Summary
---|---
d | Day of the month with leading zeros (01 to 31).
j | Day of the month without leading zeros (1 to 31).
D | A textual representation of a day in three letters (Mon to Sun).
l | A full textual representation of the day of the week (Monday to Sunday).

The following characters are commonly used to specify months in custom format dates.

Specifier | Summary
---|---
m | Numeric representation of a month with leading zeros (01 to 12).
n | Numeric representation of a month without leading zeros (1 to 12).
M | A textual representation of a month in three letters (Jan to Dec).
F | A full textual representation of a month (January to December).

The following characters are commonly used to specify years in custom format dates.

Specifier | Summary
---|---
Y | Four-digit representation of the year (e.g. 2023).
y | Two-digit representation of the year (e.g. 23).

The following characters are commonly used to specify times in custom format dates.

Specifier | Summary
---|---
H | Hour in 24-hour format with leading zeros (00 to 23).
i | Minutes with leading zeros (00 to 59).
s | Seconds with leading zeros (00 to 59).
a | Lowercase Ante meridiem (am) or Post meridiem (pm).
A | Uppercase Ante meridiem (AM) or Post meridiem (PM).
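To show what a custom format produces, the sketch below formats a Unix Epoch date. The PHP-style format "d/m/Y H:i" used by the transform corresponds roughly to Python's "%d/%m/%Y %H:%M" - the Python equivalent here is for illustration only:

```python
# Sketch: formatting a Unix Epoch date (received as a number) into a
# custom day/month/year hour:minute format.

from datetime import datetime, timezone

epoch = 1701734400                               # Unix date as a number
moment = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(moment.strftime("%d/%m/%Y %H:%M"))         # prints 05/12/2023 00:00
```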
The null value transform function is used to replace the value of a source field with null.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select null value from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The math transform function is used to perform a mathematical operation for selected fields. For example, your incoming payload might include customer records, each with a series of numeric value fields that need to be added together so the total can be pushed to a total field in the target system.
The following mathematical operations are available:
Add
Subtract
Multiply
Divide
In the instructions below, we'll step through the scenario mentioned above where our incoming payload includes customer records, each with three value fields (value1, value2, value3) that must be added together and pushed to a total field in the target system.
The steps required are detailed in two stages:
To begin, we need to update/add the required mapping row so that it includes all source fields that need to be added together and then pushed to the target.
Step 1 In your process flow, access settings for your map shape:
Step 2 Find (or add) the mapping row which requires a math transformation. In the example below, we have a row that's currently set to map the source first name field into the destination full name field:
Step 3 On the source side of the mapping row, we need to include all the fields to be used in our mathematical operation. To do this, click the 'pencil' icon associated with the existing source field:
Step 4 Details for the selected field are shown - click the add source field button:
Step 5 Click the 'pencil' icon associated with the new source field:
Step 6 Move down and update the display name and payload fields for the second source field that you want to use - for example:
In our example, our source data is coming in via a manual payload, so we are defining the payload field manually - if you're using a connection shape to receive data, you'll be able to select the required field from the associated schema for your connection.
Step 7 Accept these changes to exit back to your mapping rows - notice that there are now two source fields associated with the row you updated:
Step 8 Repeat steps 3 to 7 to add any more source fields that you need to include in the mathematical operation.
Step 9 Go to stage 2.
With all required source fields defined for our mapping row, we can add a math transform function to define the required calculation based on these fields.
Step 1 Select the add transform button for the required mapping rule - for example:
Step 2 Click the add transform button:
Step 3 Click in the name field and select math from the number section in the list of transform functions:
...math options are displayed:
Step 4 Click in the operator field and select the type of calculation to be performed - you can choose from add, subtract, multiply and divide:
Step 5 Click the add field button:
Step 6 Click in source fields and select the first source field to be used in the calculation:
All source fields that were added for this mapping in stage 1 will be available for selection here.
Step 7 Accept your changes.
Step 8 Click the add field button again:
...and add the next source field to be used - for example:
Step 9 Accept your changes.
Step 10 Repeat steps 8 and 9 to include any more source fields to be used in the calculation. Each time you accept a new source field you'll see the sequence that they will be processed when this transform function runs - for example:
Fields are processed in the sequence that they are added here.
Step 11 Having added all required source fields to be calculated, accept changes:
...then save the function:
Step 12 Ensure that the target field for this mapping row is set as required, then save the map shape. Next time the process flow runs, the mathematical operation will be performed for the given source fields and the total value is pushed to the defined target field. The example below shows an incoming payload before and after the math transformation is applied:
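The behaviour described above can be sketched as follows (field names mirror the worked example; this is an illustration of the principle, not the platform's implementation):

```python
# Sketch of an 'add' math transform: each record's source fields are summed
# and the result is written to a target field.
def apply_math_add(record, source_fields, target_field):
    record[target_field] = sum(record[f] for f in source_fields)
    return record

customer = {"name": "Ana", "value1": 10, "value2": 5, "value3": 2.5}
apply_math_add(customer, ["value1", "value2", "value3"], "total")
print(customer["total"])  # 17.5
```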
The pad transform function is used to pad an existing string of characters to a given length, using a given character. You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
The payload item below contains a string that's 8 characters long:
If we apply padding to a length of 20 using a * character to the right, the result would be:
Here, we have an extra 12 * characters to the right, giving a string length of 20. However, if we apply the * character to both, the result would be:
Now the padding is applied with 6 characters to the left of the original string and 6 characters to the right.
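The three padding directions can be illustrated with Python's built-in string methods (a sketch of the behaviour, not the platform's own implementation; the sample value is hypothetical):

```python
# Padding an 8-character string to a length of 20 with a '*' character
value = "PW-12345"            # 8 characters (hypothetical sample value)

right = value.ljust(20, "*")  # pad to the right
left = value.rjust(20, "*")   # pad to the left
both = value.center(20, "*")  # pad both sides, as equally as possible

print(right)  # PW-12345************
print(both)   # ******PW-12345******
```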
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select pad from the string section:
Step 5 Click in the direction field and select where you would like padding to be applied:
You can apply padding to the left (i.e. immediately before the existing string), to the right (i.e. immediately after the existing string), or both (immediately before and after, as equally as possible).
Step 6 In the length field, specify the number of characters that you'd like the final (i.e. transformed) string to be - for example:
Step 7 In the pad character field, specify the character that you'd like to use for padding - for example:
If you want padding to be applied with spaces, press the space bar once in this field.
Step 8 Click the add field button:
Step 9 Click in source fields and select the source field to be used for this transform:
Step 10 Accept your changes (twice).
Step 11 Save the transform.
The replace transform function is used to replace an existing source string value with either:
An alternative string value
An empty value
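The replace behaviour can be sketched as follows (an illustration only; the field and values are hypothetical):

```python
# Sketch of the replace transform: swap a search string for a replacement,
# or pass an empty replacement to remove the search string entirely.
def replace_field(record, field, search, replacement=""):
    record[field] = record[field].replace(search, replacement)
    return record

row = {"status": "order_PENDING"}
replace_field(row, "status", "PENDING", "CONFIRMED")
print(row["status"])  # order_CONFIRMED
```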
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select replace from the string category:
Step 5 Update search and replace fields with your required values:
For the replace field, you can enter another string or leave the field blank to replace the source with an empty value.
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The round number transform function is used to change the number of decimal places for a number value. For example:
...might be changed to:
With the round number transform you can specify the number of decimal places that should be applied to incoming numeric values.
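As a sketch of this behaviour (illustrative field names; not the platform's implementation):

```python
# Sketch of the round number transform: a numeric field is rounded to a
# specified number of decimal places before being pushed to the target.
def round_field(record, field, decimal_places):
    record[field] = round(record[field], decimal_places)
    return record

item = {"sku": "A1", "price": 19.98765}
round_field(item, "price", 2)
print(item["price"])  # 19.99
```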
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round number from the number category:
Step 5 Move to the decimal places field and enter the required number of decimal places required for transformed values - for example:
Step 6 Click the add field button:
Step 7 Click in source fields and select the source field to be used for this transform:
Step 8 Accept your changes (twice).
Step 9 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
The script transform function is used to apply an existing custom script to the source value, and the updated field value is pushed to the target field.
Make sure that the required script has been created and tested before applying it as a transform function.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select script:
Step 5 Click in the script version field and select the script/version that you want to apply for this field transformation - for example:
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select/update the target field and then the mapping in the usual way.
The run flow shape is used to call one process flow from another, so you can run process flows in a chain. For example, you might have a process flow that receives data from a webhook, applies filters and then hits a run flow shape to call another flow with that data.
Default behaviour is for the payload from the end of the calling process flow to be sent into the called process flow for onward processing. However, when configuring a run flow shape you have the option to add a manual payload - in this case, your manual payload will be sent into the called process flow.
The run flow shape also allows you to choose whether any variables associated with the called process flow should be applied.
If you don't configure a manual payload in the run flow shape, the final payload from the calling process flow is always sent into the called process flow.
The deployed version of a process flow is always used.
You cannot create a recursive process flow loop - for example, if Process Flow A calls Process Flow B, you cannot then call Process Flow A from Process Flow B.
Step 1 In your process flow, add the run flow shape in the usual way:
Step 2 Click in the flow field and select the process flow that you want to call/run:
You'll only see enabled & deployed process flows here.
Step 3 If your selected process flow is associated with any process variables, these are shown - you can choose to enable or disable these:
Step 4 If you want to pass a manual payload into this process flow, toggle the specify payload manually option ON and paste the required payload into the supplied payload field:
The manual payload can be any format - JSON, XML, plain text, etc.
Step 5 Save the shape.
The round date transform function is used to round source dates to either the start or end of the day, where:
start of day changes the time to 00:00 for the received date
end of day changes the time to 23:59 for the received date
So, you can round a given source date before sending the rounded value into a given target field.
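The two rounding options can be sketched with Python's datetime (an illustration of the behaviour described above, not the platform's implementation):

```python
# Sketch of start/end-of-day rounding for a received date
from datetime import datetime

def round_date(dt: datetime, mode: str) -> datetime:
    if mode == "start of day":
        return dt.replace(hour=0, minute=0, second=0, microsecond=0)
    return dt.replace(hour=23, minute=59, second=0, microsecond=0)  # end of day

received = datetime(2023, 12, 5, 14, 42)
print(round_date(received, "start of day").strftime("%H:%M"))  # 00:00
print(round_date(received, "end of day").strftime("%H:%M"))    # 23:59
```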
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select round date:
Step 5 Choose your required rounding:
Step 6 Accept your changes and save the transformation - at this point your mapping row is displayed without a target. From here, you can go ahead and add a target field:
The split shape is used to split out a given data element of a payload. When you split data, the specified element (including any nested elements) is extracted for onward processing.
For example, your process flow might be pulling customer data from a source connection, but you need to send address details to a different endpoint. In this case, you'd use the route shape to create two different routes, mapping just customer data down one, and splitting out addresses for the other.
Step 1 In your process flow, add the split shape in the usual way:
Step 2 Select a source integration and endpoint to determine where the incoming payload to be split originates:
Step 3 Move down to the level to split section and use the dropdown data path to select the required data element to split - for example:
Remember - any data (including nested data) within the selected element will be split out into a new payload.
Step 4 If required, you can add a wrapper key. This wraps the entire payload in an element of the given name - for example:
...would wrap the payload as shown below:
Step 5 Save the shape.
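Conceptually, the split shape behaves like the sketch below (the payload, data path, and wrapper key name are all hypothetical; this is an illustration, not the platform's implementation):

```python
# Sketch: extract one element (with any nested data) into a new payload,
# optionally wrapping it in an element of a given name (the wrapper key).
payload = {
    "customer": {
        "id": 101,
        "addresses": [
            {"line1": "1 High St", "city": "Leeds"},
            {"line1": "2 Low Rd", "city": "York"},
        ],
    }
}

def split(payload, path, wrapper_key=None):
    element = payload
    for part in path.split("."):   # walk down to the element to split out
        element = element[part]
    return {wrapper_key: element} if wrapper_key else element

result = split(payload, "customer.addresses", wrapper_key="addresses")
print(result)
```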
The set variables shape is used to set values for flow variables and/or metadata variables at any point in a flow.
When defining variable values you can use:
static text (e.g. blue-003)
payload syntax (e.g. [[payload.productColour]])
flow variable syntax (e.g. {{flow.variables.productColour}})
Step 1 In your process flow, add the set variables shape in the usual way:
Step 2 Access shape settings:
Step 3 Click the add new variable option associated with the type of variable that you want to define - for example:
Step 4 Options are displayed for you to define the required variables and their values. How these are displayed depends on the type of variable you've chosen to add:
Step 5 Once variables are accepted they're added to the settings panel (you can edit/delete as needed):
Step 6 Save the shape.
The trigger webhook option can be used if you want to trigger a process flow whenever a given event occurs in your third-party application.
How you use webhooks is driven by your business requirements and the capabilities of your third-party application. For example, your third-party application might send a webhook which includes a batch of orders to be processed in the body, or the webhook body might simply contain a notification message indicating that orders are ready for you to pull.
Patchworks webhook URLs are generated in the form:
For example:
The {{webhook_id}} is a Patchworks signature which is generated as a random hash (that doesn't expire). This provides built-in authentication for our URLs; however, they should still be kept private.
The default response for a successful webhook trigger is a status code of 200, with the following body:
Follow the steps below to add a new webhook trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new webhook button:
...a unique Patchworks webhook URL is generated:
Step 3 Copy this URL and paste it into your third-party application.
The documentation for your third-party application should guide you through any required setup for webhooks.
Step 5 Build the rest of your process flow as needed to handle incoming data from your defined webhook(s).
If required, you can change the default response for your webhook by selecting the 'edit' icon associated with the URL - for example:
Here, you'll find options to select an alternative status code and specify new body text:
Here you can:
Use the status code dropdown field to select the required response code.
Enter the required text in the body field.
Select the required format for your body content (choose from JSON, XML or Plain text).
When you choose to add a webhook to a process flow, a unique webhook URL is auto-generated. This URL must be added to your third-party application, so it knows where to send event data.
For webhooks to be received, a process flow must be enabled and deployed.
If required, you can change the default response for your webhook.
Step 4 If you want to customise the response for your webhook, click the edit icon associated with the URL and make any required changes.
Step 6 Make sure that your process flow is enabled and deployed - webhooks will not be received if this isn't done.
The track data shape can be used to track processed data, based on field paths that you define. When data passes through a track data shape, values associated with your defined field paths are tracked, which means they are added to the tracked data panel on the Process Flow Home page.
For example, you might choose to track all customer_id values that pass through the shape, so at any time you can quickly check if a given customer record has been processed:
Data that is tracked via the track data shape is retained for one year.
The track data shape works with incoming payloads from a connection shape, and also from a manual payload, inbound API or webhook.
JSON payloads are supported.
Flow and metadata variables are supported when defining field paths.
To add and configure a new track shape, follow the steps below.
Step 1 In your process flow, add the track data shape in the usual way:
Step 2 You can now configure the track data shape. How you do this depends on whether the data to be tracked is being added to the process flow via a connector shape, or from a non-connector source (such as a manual payload, inbound API request, or webhook):
If you have defined custom scripts for use in process flows, use the script shape to select a script to apply at a given point in a process flow.
You can use any version of a script which has been saved and deployed.
Creating a custom script is an advanced feature which requires some in-house development expertise.
Step 1 In your process flow, add the script shape in the usual way:
Step 2 You're prompted to select an existing script:
Step 3 Select the script that you want to use at this point in the process flow:
The list of available scripts only includes scripts which are currently deployed for use.
Step 4 The latest deployed version of the script is added to the shape - for example:
Code is displayed in view-mode. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
Step 5 Unless you have a specific reason to do otherwise, we advise using the latest version of scripts. However, if you do need to use a previous version of the script, select the 'versions' dropdown field to make your selection - for example:
Step 6 Save the shape:
To view/change the selected script for an existing script shape, click the associated 'cog' icon:
From here, the existing script is displayed - you can either select a different script, or a different version of the existing script:
Remember that the script code can't be changed here. If you need to change the script, save your shape now and then use the left-hand navigation bar to access process flows > custom scripts.
A script will time out if it runs for more than 120 seconds.
All process flows must begin with a trigger - it’s this that determines when and how frequently the process flow will run. For this reason, all new process flows are created with a trigger shape already in place. You should edit this shape to apply your required settings:
Having accessed settings for a trigger shape, you can select the required trigger type - this determines any subsequent options that are displayed:
Trigger schedule options are used to schedule the associated process flow to run at a specified frequency and/or time. Here, you can use intuitive selection options to define your requirements or - if you are familiar with regular expressions - use advanced options to build your own expression.
Trigger schedules are based on Coordinated Universal Time (UTC).
Schedules can be defined based on the following occurrences:
Define a schedule to run every x minutes - for example:
Define a schedule to run every x hours - for example:
Define a schedule to run on selected days of the week at a given start time - for example:
Use the every dropdown list for quick daily presets, or define custom settings:
Define a schedule to run on selected days of the month or weeks, for selected months, at a given start time - for example:
Use the every dropdown list for quick monthly presets, or define custom settings:
If you are familiar with cron expressions, you can select this option to activate the cron expression field beneath and enter your required expression directly:
Patchworks supports the 'standard' cron expression format, consisting of five fields: minute, hour, day of the month, month, and day of the week.
Each field is represented by a number or an asterisk (*) to indicate any value. For example:
0 5 * * * would run at 5 am, every day of the week.
Extended cron expressions (where six characters can be used to include seconds or seven characters to include seconds and year) are not supported.
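The five-field format can be sketched as follows (an illustrative parser, not the platform's scheduler):

```python
# Sketch: split a standard five-field cron expression into its named fields
def parse_cron(expression: str) -> dict:
    fields = expression.split()
    if len(fields) != 5:
        raise ValueError("Expected 5 fields: minute hour day-of-month month day-of-week")
    names = ["minute", "hour", "day_of_month", "month", "day_of_week"]
    return dict(zip(names, fields))

print(parse_cron("0 5 * * *"))
# {'minute': '0', 'hour': '5', 'day_of_month': '*', 'month': '*', 'day_of_week': '*'}
```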
Follow the steps below to add a new trigger schedule.
Step 1 To add a new schedule, click the add new schedule button:
Step 2 Select an occurrence.
Step 3 Define your required settings for the occurrence.
Step 4 Click save to save this schedule. The schedule is added to the shape - for example:
You can add a single schedule, or multiple schedules. When you add multiple schedules, ALL of them will be active.
As you work with shapes in process flows, you'll be used to updating shape settings with required values. Mostly, you'll define static values - for example, selecting or entering a data item to be used as a de-dupe key, entering a key name for an add to cache shape - there are dozens of settings that you might configure when building process flows.
However, there may be times where the value of a field can't be defined as a static value because it needs to be resolved dynamically, based on data received from the incoming payload and/or defined for the process flow as a whole. This can be achieved using payload variables, metadata variables, and flow variables.
Lots of our process flow shapes include settings where you can enter a static value, or provide a variable/parameter to be resolved dynamically. Please refer to our shapes documentation for specific guidance for each shape.
Using a custom script, it's possible to add metadata to a payload in the form:
For example:
There may be times when you need to define variables or parameters for a process flow step using values from the incoming payload metadata. This can be achieved using a specific payload syntax in the process flow connection shape.
Payload metadata cannot exceed 10240 bytes. Exceeding this limit will cause the process flow to fail on the associated step.
Payload metadata can be accessed via a variable or parameter field using the syntax below:
For example:
There may be times when you need to define variables or parameters for a process flow shape which resolve dynamically, based on given values from the incoming payload.
For example, you might have a list of customer IDs coming in from a source (for example, an inbound API job), and need to match these IDs with payload data for a subsequent connection shape, in order to create customer records. This would be achieved using a specific payload syntax when defining variables in the connection shape.
To pass in a variable or parameter value from the incoming payload, use the syntax shown below:
...where schema notation should be replaced with the relevant notation path for the required field in the incoming schema. A payload variable can be defined on its own, or combined with static text. For example:
If necessary, you can combine payload variables with metadata and/or flow variables - for example:
The [[payload]] variable supports non-JSON payload data types - for example, raw text, CSV, XML - whatever you pass in will be output.
To show how this works in principle, some examples are detailed below:
However, it's important to note that the required settings will depend on the data schemas used for your selected connection endpoints.
At the most basic level, your incoming data might contain items which aren't nested - for example:
In this case, our variable would be defined as:
...as shown below:
The result for our example would be:
An incoming payload might contain the required element in an array and you want to target all items within it - for example:
In this case, we can define a variable to target the required array. For example:
...will produce a comma separated list of associated customerID values. The result for our example would be:
An incoming payload might contain the required element in an array and you only want to target a single item - for example:
In this case, we can define a variable to target the required array item. For example:
...will target the first item in the array and return the associated customerID value. The result for our example would be:
Whereas:
...will target the second item in the array and return the customerID value. The result for our example would be:
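The resolution behaviour described in these examples can be sketched as follows (an illustration of the principle only - the real resolution is performed by the platform, and the sample payload is hypothetical):

```python
# Sketch of payload path resolution: dot notation for nesting, a numeric
# index for a single array item, and '*' for every item (joined as a
# comma-separated list).
def resolve(payload, path):
    current = [payload]
    for part in path.split("."):
        if part == "*":                     # fan out across all array items
            next_level = []
            for item in current:
                next_level.extend(item)
            current = next_level
        elif part.isdigit():                # target one array item by index
            current = [item[int(part)] for item in current]
        else:                               # descend into a named element
            current = [item[part] for item in current]
    return ",".join(str(v) for v in current)

payload = {"users": [{"customerID": 101}, {"customerID": 102}]}
print(resolve(payload, "users.0.customerID"))  # 101
print(resolve(payload, "users.*.customerID"))  # 101,102
```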
Let's take our example below:
We've seen how we can target specific items in an array, and how we can list all items in an array - but what if we wanted to target all array items individually?
In this case, we would add a flow control shape to split the incoming payload into batches of 1 at the required data element level (in the case of our example, this would be users):
This will result in three payloads, each with a single customerID and name. For example:
In the following connection step (where the variable/parameter is defined), we can add our payload syntax for the variable/parameter. For our example this would be:
...as shown below:
Here, we need to specify the * because each of our batched payloads is wrapped in an array. For each payload generated from our example, the result is that the associated customerID is taken as the variable value.
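The batching step can be sketched as follows (illustrative data; the real splitting is done by the flow control shape):

```python
# Sketch: split a 'users' array into batches of 1 - one payload per item,
# each still wrapped in an array (hence the '*' in the payload syntax).
payload = {"users": [
    {"customerID": 101, "name": "Ana"},
    {"customerID": 102, "name": "Ben"},
    {"customerID": 103, "name": "Cal"},
]}

batches = [[user] for user in payload["users"]]   # batch size of 1

for batch in batches:
    # In the following connection shape, the payload syntax would resolve
    # to the single customerID in each batched payload
    print(batch[0]["customerID"])
```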
Flow variables provide the ability to define variables at the process flow level, and then reference these values throughout the entire process flow. You set a flow variable once, and it is applied throughout the entire process flow, wherever it is referenced.
When flow variables are modified - either manually or via a script - those updates are applied anywhere in the process flow where they are referenced, automatically.
Please see the following pages:
If your process flow includes a connection shape that's configured for an endpoint where a variable can be entered, you'll probably be used to entering a static value to be applied for that step - for example:
You might also be familiar with obtaining values dynamically from a payload. However, flow variables provide another level of flexibility.
You can also reference flow variables in custom scripts (which means you can manipulate these values however you need) and also in field mapping transformations.
Before you start working with flow variables, there are a couple of important points to understand regarding versioning and value persistence.
Flow variables are version-specific. For example, if you add flow variables to the current draft version and later restore an inactive version to draft, any defined flow variables won't be present. So, make sure you're updating the correct version of a process flow.
If you update a flow variable via a script, those updates persist for the duration of the flow run. Once the process flow has completed, default values are restored.
When defining variables you can 'mix and match' payload, metadata, and flow variables. For example:
If required, you can reference (and therefore manipulate) flow variables in custom scripts. The possibilities here are only limited by your development expertise; however, a simple example might be where you want to generate a running count of order lines, to be output to a total field in a target system.
To achieve this, you would:
Add a flow variable named running_total to your process flow settings.
Write a custom script which loops over each received order line and updates the running_total variable as it goes.
Add the custom script to your process flow via a script shape.
Add a map shape to your process flow and include a rule which maps a custom string transform for {{flow.variables.running_total}} to the total field in the target system.
To reference flow variables in a custom script, the required syntax is as follows:
In all cases, the variableName element should be replaced with the actual flow variable name. For example:
The example script below takes a flow variable named customerID and sets the value to 1234567:
So, wherever the customerID flow variable is referenced in a process flow, its value would be set to 1234567 when the process flow runs.
When you update flow variables via a script, those updates persist for the duration of the flow run. Once the process flow has completed, default values are restored.
To add a shape to a process flow, click the + icon at the point you want to place it - for example:
...then choose the type of shape to add:
Depending on the shape, the settings panel will either open immediately so you can provide details before the shape is added to the canvas, or the shape is added to the canvas and you can update settings when you're ready.
To update settings for an existing shape, click the associated 'cog' icon - for example:
The settings panel is displayed, so you can configure the shape as required - don't forget to save changes.
To remove a shape, click the associated 'cog' icon - for example:
...then click the delete option in the settings panel - for example:
The steps required to reference flow variables in a process flow can be summarised in two stages:
Any flow variables that you want to reference from process flow shapes should be added as variables within the process flow settings. To do this, follow the steps below.
Step 1 Access the process flow that you want to update and make sure that you're switched to the required version.
Typically, you would update the draft version, then deploy changes when you are ready.
Step 2 Select settings (the cog icon) from the actions bar:
Step 3 Look for the variables section in the flow settings panel - for example:
Step 4 Click the add new variable button:
Step 5 In the name field, enter the name (i.e. the API parameter name) of this variable. For our example, the variable is named customerID:
Step 6 Click in the select a type field and select the data type for this variable:
Step 7 Enter the required value to be used wherever this variable is found in the process flow - for example:
Step 8 Add all required flow variables in the same way, then save changes.
Having defined your required flow variables, they can be referenced from process flow connection shapes, wherever a variable field is present. So, if your connection shape is set to use an endpoint which requires/allows variables to be applied, you will see corresponding variable fields in the connection shape settings.
The example below shows how this works: a GET single order endpoint has been configured to expect a customerID variable, and this variable is then surfaced in connection shape settings when the endpoint is used:
When defining variable values for a connection shape, you can enter a static value, or obtain values dynamically from a payload, or reference an existing flow variable. The steps below show how to reference a flow variable.
You can also reference flow variables in custom scripts (which means you can manipulate these values however you need) and also in field mapping transformations.
Step 1 In your process flow, access settings for the connection shape that you want to update with a flow variable:
Step 2 Look for the variables section in the settings panel - for example:
To use a flow variable here, the expected variable must correlate with a variable that you added in stage 1. Notice that the example above is expecting a Customer ID variable, which correlates with the customerID flow variable that we added in step 5 of stage 1.
Step 3 Use the syntax below to reference a flow variable:
...where the variable element should be replaced with the name of the flow variable defined in process flow settings (stage 1). Using our example, this would be:
Step 4 Save changes. Now when this process flow runs, the value defined in process flow settings will be passed in for this variable.
Flow variable values can also be updated by custom scripts. For further information please see Referencing flow variables in custom scripts.
To add a transformation for a field mapping, you create a new transform and then build the required functions. To do this, follow the steps below:
Step 1 Access the required process flow and then edit the map shape to be updated with a transform:
Step 2 Find the mapping that you want to update, then click the transform icon (between source and target elements). For our example, we're going to add a prefix to the 'id' field:
Step 3 Click the add transform button:
Step 4 Use the select a function field to choose the type of function that you need to use (functions are organised by type):
Step 5 Depending on the type of function you select, additional fields are displayed for you to complete. Update these as required - for our example, we're specifying the text to be added as a prefix:
Step 6 Now we need to confirm which source field this transform should be applied to - click the add field button:
Step 7 Select the required field:
In straightforward scenarios, this will typically be the same source field as defined for the mapping row. However, more complex scenarios may present multiple options here - for example, if you apply multiple transforms to the same mapping.
Step 8 Accept your changes.
Step 9 Add more fields if necessary.
Step 10 When you're satisfied that all required fields have been added, accept changes and then save the shape settings.
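Conceptually, a prefix transform behaves like the sketch below (illustrative Python only; the field names and the ORD- prefix are invented examples, not part of the product):

```python
# Conceptual sketch only - field names and the "ORD-" prefix are invented.

def prefix_transform(value, prefix):
    """Mimics a prefix-style transform function applied to a source field."""
    return f"{prefix}{value}"

source_record = {"id": 12345}

# Mapping row: source 'id' -> target 'order_reference', with a prefix transform
target_record = {"order_reference": prefix_transform(source_record["id"], "ORD-")}

print(target_record)  # {'order_reference': 'ORD-12345'}
```

The transform runs against the source field's value before that value reaches the target field.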
The same process flow
Other process flows for your company profile
Other process flows for any of your linked company profiles
To export the configuration for a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to export:
Step 2 Click the export map button:
Step 3 The configuration is exported and saved to your default downloads folder. The filename is always map.json.
To import a mapping configuration into a map shape, follow the steps below:
Step 1 Access the required process flow, then click the settings icon for the map shape that you want to update:
You can import a mapping configuration into a new map shape, or into an existing one. If you import a configuration into an existing map shape, any existing mappings will be overwritten.
Step 2 Click the import map button:
Step 3 Navigate to the downloaded map configuration file on your local drive, then select it.
The default filename for exported map configuration files is map.json.
Step 4 Once you've selected a valid mapping configuration file, the import completes immediately.
The flexibility of process flows means that there's no 'one size fits all' approach - everyone's requirements are different, and the scope is huge. This level of flexibility is a great advantage but on the flip side - where do you start?
Here, we outline the bare bones of a process flow so you know what to consider as a minimum when getting started for the first time.
A scratchpad area will be available soon. In the meantime, we suggest registering for a sandbox account and experimenting there.
Make sure you create instances with credentials for your third-party application sandbox accounts, rather than live ones!
In their simplest form, process flows are defined to receive data from one third-party application and send it into another third-party application, perhaps with some data manipulation in between. Key elements are summarised below.
Process flows allow you to build highly complex flows with multiple routes and conditions. Here, we're considering an entry-level scenario to highlight key items as you get started with process flows.
Patchworks process flows are incredibly flexible. With a range of shapes for receiving, paginating, manipulating, batching, splitting, caching and sending data, you can build highly complex flows in a matter of minutes.
With this in mind, it's important to understand how data flows through shapes.
In the simplest of scenarios, a process flow receives a single payload of unpaginated data and this flows all the way through to completion with no manipulation or batching - one payload is received, processed, and completed.
However, if your incoming data is paginated and/or you introduce shapes capable of generating multiple payloads, it's important to understand how these pass through the flow. Essentially, any payloads that a shape outputs are added to a 'bucket' and it's that bucket that is then passed to the next shape.
So, all payloads from one shape are passed to the next shape in the same context - they don't pass down the entire flow individually.
The animation below shows how this works.
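The same behaviour can be modelled in code (illustrative Python only, not Patchworks internals; all names are invented):

```python
# Illustrative sketch of payload "buckets" - all names here are invented.
# Each shape receives the complete set of payloads output by the previous
# shape, rather than payloads travelling down the whole flow one at a time.

def paginated_pull():
    """A connection shape pulling paginated data: one payload per page."""
    return [{"page": 1, "orders": [101, 102]},
            {"page": 2, "orders": [103]}]

def uppercase_keys(bucket):
    """A manipulation shape: receives the whole bucket, outputs a new bucket."""
    return [{key.upper(): value for key, value in payload.items()}
            for payload in bucket]

bucket = paginated_pull()        # shape 1 outputs two payloads
bucket = uppercase_keys(bucket)  # shape 2 receives both, in the same context

print(len(bucket))  # 2
```

Here, both pages pass to the second shape together - neither payload travels down the flow on its own.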
With some exceptions, if a process flow shape fails, a further three attempts are made. Exceptions are summarised below:
In the simplest scenario, your given cache key would be a static value (e.g. customers) and you would use this to load the entire cache (containing perhaps tens, hundreds, even thousands of items) where required. But what if you want to load a specific item from a cache, rather than the whole thing?
This is where dynamic cache keys are so useful.
To load data from a cache, you configure a load from cache shape with the required cache and a single cache key. All data associated with your given cache key is loaded.
Consider the example incoming payload below, where four records are cached with a static cache key with a value of customers:
If we were to configure a load from cache shape to access the customers cache key, all four records would be loaded.
So, in order to load specific items from a cache, the incoming data must be added to a cache in such a way that we can easily target individual items. We need an efficient way to take incoming data, batch it into single-record payloads and add each of these to the cache with its own unique, identifying cache key - i.e.:
We can achieve this as follows:
When you specify a dynamic variable as the cache key, the value for that variable is injected into the key. To prevent large amounts of data being passed into the key, cache keys are limited to 128 characters.
These steps assume that you have already defined a flow control shape (or some other means) to ensure that the add to cache shape receives single-record payloads.
Step 2 In the add to cache shape settings, choose to create cache:
Step 3 Set the cache level and name as required and save changes.
Step 4 Select the cache that you just created - for example:
...where schema notation should be replaced with the notation path to the first occurrence of the required element in the payload, which should be used to form the cache key. If required, you can also include a static prefix or suffix. For example:
The output of the payload variable will be used as the cache key.
Step 6 Save the add to cache shape settings.
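As a rough illustration (invented Python, not the real caching engine), batching into single-record payloads and generating a dynamic cache key per record might look like this - the resolve_notation helper is an assumption, and the customer- prefix mirrors the example keys above:

```python
# Rough illustration only - not the real caching engine. resolve_notation is
# an invented helper; the "customer-" prefix mirrors the example cache keys.

MAX_KEY_LENGTH = 128  # documented limit for dynamic cache keys

def resolve_notation(payload, path):
    """Resolve a dotted schema-notation path (e.g. 'customers.0.id')."""
    value = payload
    for part in path.split("."):
        value = value[int(part)] if part.isdigit() else value[part]
    return value

incoming = {"customers": [{"id": 1000000001}, {"id": 1000000002}]}

cache = {}
for record in incoming["customers"]:          # assumes batches of 1 record
    payload = {"customers": [record]}
    key = "customer-" + str(resolve_notation(payload, "customers.0.id"))
    if len(key) > MAX_KEY_LENGTH:
        raise ValueError("cache key exceeds 128 characters")
    cache[key] = payload

print(sorted(cache))  # ['customer-1000000001', 'customer-1000000002']
```

Each single-record payload ends up under its own uniquely identifying key, so later flows can load one customer rather than the whole cache.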
Field transformations can be defined to change the value of a data field pulled from a source system before it is sent to its target. A transformation is made up of one or more transform functions.
This page explains how to add a transformation for a field mapping, and how to export and import map shape configurations.
For a summary of available transform functions please see the transform functions section.
For information about adding a field transformation using a cross-reference lookup table, please see our cross-reference lookups section.
If you're building multiple process flows with similar requirements for field mappings, you can export the configuration for a map shape, and then import that configuration into another map shape.
When a map shape configuration is exported, a JSON file is generated and saved (automatically) to the default download folder for your browser. All mappings and associated transformations are exported. You can then import this file to any other map shape within:
A process flow can be associated with three version types: draft, deployed and inactive. Before you get started building process flows, we advise reading our Process flow versioning page to make sure you understand how this works.
If a 'pull' is configured to use an endpoint that paginates data, the connection shape outputs each page in its own payload.
All shapes (except the connection shape) have a set timeout of 30 minutes. If processing is not completed within this time, the shape fails.
The timeout for a connection shape is configurable in its shape settings.
Shape | Number of retries |
---|---|
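Setting the exceptions aside, the default behaviour - one initial attempt plus up to three retries - can be sketched as follows (illustrative Python; the names and logic are invented, not Patchworks internals):

```python
# Hedged sketch of the documented retry behaviour: if a shape fails, up to
# three further attempts are made (four tries in total). Simplified model.

def run_with_retries(shape_fn, payloads, extra_attempts=3):
    attempts = 0
    while True:
        attempts += 1
        try:
            return shape_fn(payloads)
        except Exception:
            if attempts > extra_attempts:  # initial try + 3 retries exhausted
                raise

calls = {"n": 0}

def flaky_shape(payloads):
    calls["n"] += 1
    if calls["n"] < 3:          # fail twice, succeed on the third attempt
        raise RuntimeError("transient failure")
    return payloads

result = run_with_retries(flaky_shape, [{"ok": True}])
print(calls["n"])  # 3
```

A shape that fails on its first two attempts still completes on the third, while a shape that keeps failing is abandoned after the fourth try.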
The ability to update some settings will vary, depending on the . For example, you can't add flow variables to a deployed version.
# | Item | Summary |
---|---|---|
When an is dropped into a process flow, the entire incoming payload is cached and associated with the given cache key. Depending on the cache type, you can load this cache later in the same flow or in a different flow.
Action | Outcome |
---|---|
Using a flow control shape is an easy way to batch incoming data into single-record payloads; however, you may prefer an alternative approach. The important point is that the add to cache shape must receive single-record payloads - how you achieve this is up to you.
Any combination of payload, metadata and flow variables can be used to form cache key names.
Follow the steps below to configure an add to cache shape with a payload variable for generating dynamic cache keys.
Step 1 Add a flow control shape to batch incoming data into single-record payloads, where required.
For more information on these fields please see the add to cache shape page.
Step 5 Move down to the cache key field and enter the required key. Here, you use standard schema notation to define your target data element:
Our example uses dynamic payload variables; however, you can also use metadata variables and/or flow variables. For more information please see our variables section.
Cached data can be loaded via our load from cache shape. Please refer to the load from cache section for more information.
Every process flow starts with a trigger - when should this flow run? In process flows, trigger options are defined using the trigger shape.
Have you installed a connector for this application?
Have you added an instance (or multiple instances) of this connector?
In process flows, a data source is defined by adding a connection shape and selecting a connector instance and endpoint.
Do you need to refine data in the current payload, before it's processed any further? In process flows, filters are defined using the filter shape.
Does the data that you pull (i.e. the payload) require advanced manipulation before processing continues? If it does, and you have development expertise in-house, you can write and apply custom scripts.
In process flows, existing custom scripts are added using the script shape. This is an advanced feature - very often, standard field mappings are enough to sync data as needed.
Do you need to manipulate source field values before they are synced to the destination? If yes, will standard transform functions handle this, or are custom scripts required?
In process flows, field mappings and (if required) transformations are applied using the map shape.
Have you installed a connector for this/these application(s)?
Have you added an instance (or multiple instances) of this/these connector(s)?
In process flows, the data destination is defined by associating an instance with a connection shape.
1 | Process flow name & description | The name displayed for this process flow throughout the system. Optionally, you can include a description. |
2 | Enabled toggle button |
3 | Use queued time | Some process flow steps (connectors, filters, set variables, transforms, etc.) can be configured to use dynamic/relative dates, where the date is relative to the time the process flow is initialised. To prevent cases where records are missed because they were added between the time a flow was initialised and the time it actually ran, the Use queued time process flow setting can be used. This allows you to choose whether any relative dates should be based on the time that the process flow is initialised or the time it is queued. This option defaults to |
4 | Labels |
5 | Email failure notifications |
6 | Deploy |
7 | Variables |
8 | Versions |
cache: customerData cache key: customer-1000000001
cache: customerData cache key: customer-1000000002
cache: customerData cache key: customer-1000000003
cache: customerData cache key: customer-1000000004
The incoming payload is batched into multiple payloads - one payload per data element (e.g. one order per payload, one customer per payload, one product per payload, etc.). |
Configure the add to cache shape and specify a payload variable as the cache key, where the variable looks for the first occurrence of a uniquely identifying element in the payload (typically an id or reference number). | The add to cache shape receives and caches single-record payloads from the flow control shape. The cache key for each payload is generated dynamically by resolving the payload variable from each incoming payload. |
You can add notes to any shape in a process flow. This can be useful for many reasons - for example, to keep track of why certain shapes are used in more complex flows; to add reminders for any updates needed in future, or perhaps to leave guidance for another user.
You can add a single note or multiple notes to each shape - there's no limit
Any single note cannot exceed 64kb
Notes are not encrypted - we strongly advise against adding sensitive information such as API keys, login credentials, payload data, etc.
Notes are associated with a shape, not the process flow version, so if you add notes to shapes in a draft process flow and then deploy that process flow, the notes remain in the deployed version.
To access shape notes, click the notes icon for the required shape:
Any existing notes for this shape are displayed:
From here, you can click any existing note to open it in the notes editor, or use the add note button to add a new note.
To add a new note for a shape, click the shape notes icon for the required shape and then click the add note button:
From here you can add required content via the notes editor, then save your note.
Notes are added using a markdown editor, which includes a preview pane for rendered output:
Standard markdown formatting can be used, or you can use the notes editor toolbar to apply formatting and add elements such as code blocks, tables and flow charts.
Click here to view a markdown cheat sheet.
When adding/editing a note, you can apply a colour - perhaps useful to categorise different types of notes with a visual cue.
You can also mark a note as private (so only you can see it), or make it available for all users in your organisation. Once a note is saved, the rendered version is displayed in the notes panel.
Notes persist to subsequent versions of a process flow. For example, if you add two notes to a draft process flow, then deploy that flow, the deployed process flow will have two notes. If you go on to add one more note to the current draft version and then deploy this draft, the deployed process flow will have three notes. The inactive version will have two notes.
For more information on process flow versions, see our Process flow versioning page.
The cast boolean to string transform function is used to change the data type associated with a source field from boolean to string. A boolean data type can have only two possible states: true or false.
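As a rough sketch (invented Python; the lowercase true/false output format is an assumption, not confirmed by this page), the cast behaves like this:

```python
# Hedged sketch of what a 'cast boolean to string' transform does to a mapped
# field's value; the lowercase true/false output format is an assumption.

def cast_boolean_to_string(value):
    if not isinstance(value, bool):
        raise TypeError("source field is not a boolean")
    return "true" if value else "false"

print(cast_boolean_to_string(True))   # true
print(cast_boolean_to_string(False))  # false
```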
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select cast boolean to string from the other category:
Step 5 Click the add field button:
Step 6 Click in source fields and select the source field to be used for this transform:
Step 7 Accept your changes (twice).
Step 8 Save the transform. You'll notice that the transform icon is updated to indicate that a transform function is present for the mapping row - for example:
This is preview documentation for a feature that's due for release very soon!
Event connectors can trigger process flows by listening for events that are published to message queues/topics by a message broker (e.g. RabbitMQ).
Once an event connector is configured, it becomes available for use as a process flow trigger.
New event connectors are available for selection in process flow trigger shapes as soon as they are saved successfully.
ALL messages published to selected queues/topics are passed through to the process flow.
Follow the steps below to add an event connector as a process flow trigger.
Step 1 Click the settings icon associated with the trigger shape in your process flow:
Step 2 Click the add new event listener button:
...event options are displayed:
Step 3 Select a broker from the list (all configured event connectors are available for selection):
Step 4 Select a queue from the list - all configured message queues/topics for the selected broker (i.e. event connector) are available for selection:
Step 5 Save the shape settings.
The custom string transform function is used to map a given string to a target field. This string can be static, or you can reference flow variables and cached data.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when a custom string transformation is used we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for your map shape:
Step 2 Select the add transform button for the required mapping rule - for example:
Step 3 Click the add transform button:
Step 4 Click in the name field to access a list of all available transform functions, then select custom string:
Step 5 Move down to the custom string field and enter your required text or variables - for example:
For more information about referencing flow variables in a custom string, please see our Referencing flow variables in field mapping transformations page. For more information about referencing cached data in a custom string, please see our Referencing a cache in mapping transformations page.
Step 6 Accept your changes:
...then save the transformation:
Step 7 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 8 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the custom string (or associated values from variables) will be mapped to the given target field.
You can reference flow variables in field mapping transformations, using a custom string transformation in the map shape. Most typically, the custom string transformation type is used to specify some custom text to be applied in a target field. For example, you might choose to add a particular sales rep's name to a reference field for all orders being synced.
However, if your process flow is using flow variables, you can reference these in custom string transformations, instead of static text. When the process flow runs, the flow variable value is mapped to your given target field. If you later decide to update flow variables (either manually or via a custom script), those updates are mapped next time the process flow runs.
The steps required to achieve this can be summarised in two stages:
Any flow variables that you want to reference in field transformations should be added as variables within the process flow settings. To do this, follow the steps below.
Step 1 Access the process flow that you want to update and make sure that you've switched to the required version.
Typically, you would update the draft version, and then deploy changes when you are ready.
Step 2 Select settings (the cog icon) from the actions bar:
Step 3 Look for the variables section in the flow settings panel - for example:
Step 4 Click the add new variable button:
Step 5 In the name field, enter the name (i.e. the API parameter name) of this variable. For our example, the variable is named rep:
Step 6 Click in the select a type field and select the data type for this variable:
Step 7 Enter the required value to be used wherever this variable is found in the process flow - for example:
Step 8 Save changes.
Having defined your required flow variables, they can be referenced in custom string field mapping transformations. Typically, this will be for scenarios where you want to map the output from a flow variable into a target field.
If you've added/updated a map shape before, you'll be used to selecting a source field and a target field. However, when we map a flow variable value to a target field, we don't select a source field - the custom string transformation is our data source.
Step 1 In your process flow, access settings for the map shape that you want to update with a flow variable:
Step 2 Click the add mapping rule option - for example:
Step 3 Click the add transform button:
Step 4 Click the add transform button:
Step 5 Click in the name field to access a list of all available transform functions, then select custom string:
Step 6 Move down to the custom string field and enter the following value:
...where the variable element should be replaced with the name of the flow variable (defined in process flow settings, stage 1) that you want to reference. Using our example, this would be:
For example:
You can also reference metadata variables in the same way, using the meta syntax:
[[meta.unique_key]]
Step 7 Accept your changes:
...then save the transformation:
Step 8 Now you can select a target field in the usual way - for example:
...then:
...then:
Step 9 Once your mapping is complete, the row should be displayed without a source field - for example:
From here you can save changes or add more mapping rules as needed. Next time the process flow runs, the value of this flow variable will be mapped to the target field.
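As an illustration (invented Python; only the [[...]] token syntax comes from this page - the resolution logic and values below are assumptions), substituting flow and metadata variables into a custom string might look like this:

```python
# Illustrative sketch - only the [[...]] token syntax comes from this page;
# the resolution logic and variable values below are invented.
import re

def resolve_custom_string(template, flow_vars, meta):
    """Substitute [[name]] and [[meta.name]] tokens in a custom string."""
    def replace(match):
        token = match.group(1)
        if token.startswith("meta."):
            return str(meta[token[len("meta."):]])
        return str(flow_vars[token])
    return re.sub(r"\[\[(.+?)\]\]", replace, template)

flow_vars = {"rep": "Jane Smith"}   # flow variable from process flow settings
meta = {"unique_key": "run-42"}     # metadata variable

print(resolve_custom_string("Sales rep: [[rep]]", flow_vars, meta))
print(resolve_custom_string("[[meta.unique_key]]", flow_vars, meta))
```

If the rep flow variable is later updated (manually or via a custom script), the new value is substituted next time the flow runs.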
The route shape is used for cases where a process flow needs to split into multiple paths, based on a given set of conditions. Conditions are defined based on any fields found in the schema associated with your source data, so the scope for using routes is huge.
To define multiple routes for your process flow you must:
Add a route shape.
Configure the route shape to add required routes and conditions.
Build the flow for each configured route by adding shapes in the usual way.
By default, multiple routes are processed in parallel when a process flow runs.
When you add a route shape to a process flow, the shape is added to your canvas with two placeholder route stems - for example:
To configure these routes (and add more if needed) click the 'cog' icon associated with this shape to access route settings.
Follow the steps below to configure route data for a route shape.
Step 1 Select a source integration and endpoint to determine where the incoming payload for the route shape is coming from - for example:
Step 2 Select a routing method to determine what should happen if a payload record matches conditions defined for more than one route:
These options are summarised below:
Step 3 Click the 'edit' icon associated with the first route:
Step 4 Enter your required name for this route - it's a good idea to ensure this provides an indication of the route's purpose. For example:
Step 5 Click the add new filter button:
Step 6 Filter settings are displayed:
From here, you can select a field (from the data schema associated with the source endpoint selected in step 1) - for example:
Alternatively, you can toggle the manual input option to ON and add syntax for dynamic variables:
The manual data path field supports metadata variables.
Step 7 Use remaining operator, type and value options to define the required filter.
Step 8 Use the keep matching? toggle option to choose how matched records should be treated:
Here:
If the keep matching? option is toggled OFF, matched records are removed from the payload before it moves on down the flow for further processing.
If the keep matching? option is toggled ON, matched records remain in the onward payload, and all non-matching records will be removed.
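Conceptually, the toggle behaves like this sketch (illustrative Python; the record fields and condition are invented examples):

```python
# Hedged sketch of the 'keep matching?' toggle on a route filter: the same
# match condition either keeps or removes matching records from the payload.

def apply_filter(records, matches, keep_matching):
    if keep_matching:
        return [r for r in records if matches(r)]      # keep matches only
    return [r for r in records if not matches(r)]      # remove matches

orders = [{"status": "shipped"}, {"status": "pending"}]
is_shipped = lambda r: r["status"] == "shipped"

print(apply_filter(orders, is_shipped, keep_matching=True))   # shipped only
print(apply_filter(orders, is_shipped, keep_matching=False))  # pending only
```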
Step 9 Click the save button (at the bottom of the panel) to confirm these settings. The new rule is added for your first route - for example:
Step 10 Repeat steps 5 to 9 to add any additional rules for this route. When you've added all required rules, click the save/update button at the bottom of the panel.
Step 11 Repeat steps 3 to 9 to configure the second route.
Step 12 Add any additional routes required using the add route button. Each time you add a new route, the canvas updates with an additional route stem from your route shape.
Step 13 Save your changes.
Having defined all required routes and associated conditions, the route shape on the canvas will have the corresponding number of route stems, ready for you to add shapes. For example:
Click the + sign for each branch in turn, then add the required shapes for each route flow.
Use this option to enable or disable this process flow.
With the introduction of queueing for process flow runs, all process flows are added to a queue when they are initialised (either by a defined schedule, or a manual run). This means that the time a flow is initialised is not the same as the time the flow actually runs - sometimes the run will be almost instant, but in busier periods there may be some minutes between starting and running a flow.
View and update labels associated with this process flow. You can remove an existing label, add new labels from the dropdown list, or create and add a brand-new label.
Use the dropdown list to select a notification group to receive an email failure notification.
Use this option to deploy this version of the process flow.
Define flow variables and then reference these values throughout the entire process flow.
All existing versions of a process flow are displayed. From here you can select any version to view the flow at that point in time.
Place a flow control shape immediately before the add to cache shape and configure it to create batches of 1 at the appropriate level for your data.
Option | Summary |
---|---|
Follow all matching routes
If a record matches defined conditions for multiple routes, send it for onward processing down all matched routes.
Follow first matching route only
If a record matches defined conditions for multiple routes, send it for onward processing down the first matched route, but no more.
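The difference between the two routing methods can be sketched as follows (illustrative Python; the route names and conditions are invented examples):

```python
# Hedged sketch of the two routing methods: a record matching several routes
# is either sent down every matching route, or only the first one.

def route(record, routes, follow_all):
    matched = [name for name, condition in routes if condition(record)]
    return matched if follow_all else matched[:1]

routes = [
    ("uk-orders", lambda r: r["country"] == "UK"),
    ("high-value", lambda r: r["total"] > 100),
]
order = {"country": "UK", "total": 250}  # matches both routes

print(route(order, routes, follow_all=True))   # ['uk-orders', 'high-value']
print(route(order, routes, follow_all=False))  # ['uk-orders']
```

Note that with "follow first matching route only", route order matters: the record is processed by whichever matching route is defined first.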