The connector shape is used to define which connector instance should be used for sending or receiving data, and then which endpoint.
Any connector instances that have been added for your installed connectors are available to associate with a connector shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
When you add a connector shape to a process flow, its settings panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update settings for an existing connector shape, click the associated 'cog' icon - for example:
Follow the steps below to configure a connector shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
Step 2 Select the endpoint that you want to use - for example:
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a timeout error.
Step 7 Set error-handling options as required. Available options are summarised below:
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Step 9 If required you can set response handling options:
These options are summarised below:
Step 10 Save your changes.
Valid FTP commands for the SFTP connector are summarised in the following sections:
To configure a database connection, add a connector shape to your process flow and select the required source instance and source query for an existing database connector instance:
Having selected a source instance, you'll know if you're working with a database connector because the subsequent field requires you to choose a source query.
get_and_move
Retrieve a file, load its content into the process flow as a payload, then move the file to another directory on the remote server. If multiple files are retrieved, multiple payloads are generated (one payload per file).
pluck
Retrieve file(s), load content into the process flow as payload(s) and then delete file(s) from the remote server. If multiple files are retrieved, multiple payloads are generated (one payload per file).
append
Add content from source file(s) to the end of target file(s).
copy
Copy file(s) from one directory to another directory on the remote server (so the file exists in both directories).
rename
Rename file(s). This operation can be combined with moving the files.
delete
Remove file(s).
put_non_empty
Upload file(s) only if they have content (i.e. avoid uploading empty files).
List files in a directory on the remote server. Information is loaded into the process flow as a single payload.
size
Get file sizes from a directory on the remote server. Information is loaded into the process flow as a single payload.
modified
Get the last modified date/time (UTC format) from a directory on the remote server. Information is loaded into the process flow as a single payload.
mime
Get/detect MIME type of file(s). A boolean value is returned for each file.
exists
Check if a file exists.
Generally, database connector settings work on the same principles as 'normal' connectors, but there are differences depending on whether you're using a query that receives or sends data.
When a connector shape is configured with a receive type query (i.e. you're receiving data from a database), you'll see settings sections for variables, error handling, and response handling:
These options are summarised below.
Variables
Custom
Any query variables defined for the selected query are displayed here. Variables may be mandatory (denoted with an asterisk) so a value must be present at runtime, or optional.
Error handling
Retries
Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Response handling
Save response in payload
Set this option to on to save the response from the completed operation IN the payload for subsequent processing.
This option provides the ability to access the response body via payload variables.
By default, the response is saved in a field named response however, when the save response in payload option is toggled on, you can specify your preferred field name.
When a process flow runs using a receive type query, received data is returned in a payload. You can view this data in logs - for example:
When a connector shape is configured with a send type query (i.e. you're sending data to a database), you'll see settings sections for variables, database, error handling, and response handling:
These options are summarised below.
Variables
Custom
Any query variables defined for the selected query are displayed here. Variables may be mandatory (denoted with an asterisk) so a value must be present at runtime, or optional.
Database
Override query
You can enter your own database query here which will be run instead of the source query already selected.
Before using an override query you should ensure that:
target columns exist in the database
target columns are configured (in the database) to accept null values.
Items path
If your incoming payload contains a single array of items, enter the field name here. In doing so, the process flow loops through each item in the array and performs multiple inserts as one operation.
For more information, please see the items path notes below.
Error handling
Retries
Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
The items path field is used to run the associated query for multiple items (i.e. database rows) in a single operation, by looping through all items in a given array:
Here, specify the field name in your payload associated with the array to be targeted.
The items path field can't be used to target individual payload items. For this, you'd need an alternative approach - for example, you might choose to override query and provide a more specific query to target the required item.
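As an illustration of the looping behaviour, here's a minimal Python sketch. The payload, field names, and generated SQL are hypothetical - the platform performs the equivalent internally:

```python
# Hypothetical payload containing a single array of items (field names are
# illustrative, not taken from a real flow)
payload = {
    "products": [
        {"sku": "BAG-001", "quantity": 11},
        {"sku": "BAG-002", "quantity": 12},
    ]
}

items_path = "products"  # the value entered in the items path field

# Conceptually, the flow loops through the targeted array and runs the query
# once per item - i.e. one insert per database row, performed as one operation:
statements = [
    f"INSERT INTO products (sku, quantity) VALUES ('{item['sku']}', {item['quantity']})"
    for item in payload[items_path]
]
for statement in statements:
    print(statement)
```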
When a process flow runs using a send type query, the default behaviour is for the response from your database to be returned as the payload. You can view this in logs - for example:

Retries
Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Backoff
If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff value. This sets a delay (in seconds) before a retry is attempted.
You can define a value between 1 and 10 seconds. The default setting is 1.
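The retry/backoff behaviour described above can be sketched as follows. This is illustrative Python only, not Patchworks platform code:

```python
import time

# Minimal sketch of retry-with-backoff (illustrative only):
#   retries: extra attempts after a failed connection (0-2, default 1)
#   backoff: delay in seconds before each retry (1-10, default 1)
def connect_with_retries(send_request, retries=1, backoff=1):
    attempts = retries + 1
    for attempt in range(attempts):
        try:
            return send_request()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retries exhausted - the connection fails
            time.sleep(backoff)  # wait before the next attempt
```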
Allow unsuccessful statuses
If you want the process flow to continue even if the connection response is unsuccessful, toggle this option on. The default setting is off.
Raw
Push the payload exactly as it is pulled - no modifications are made.
First
This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. So, [{customer_1}] is pushed as {customer_1}.
Generally, if your process flow is pulling multiple records from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed.
Wrapped
This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects.
The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, {customer_1},{customer_2},{customer_3} is pushed as [{customer_1},{customer_2},{customer_3}].
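The three wrapping options can be sketched in Python as below. This is an illustrative model of the behaviour, not platform code:

```python
# Illustrative sketch of the three payload wrapping options (not platform code)
def apply_wrapping(payload, mode):
    if mode == "raw":
        # push the payload exactly as it was pulled
        return payload
    if mode == "first":
        # unwrap a single-record array: [{...}] is pushed as {...}
        return payload[0] if isinstance(payload, list) and payload else payload
    if mode == "wrapped":
        # wrap the entire payload as an array: {...} is pushed as [{...}]
        return payload if isinstance(payload, list) else [payload]
    raise ValueError(f"unknown wrapping mode: {mode}")

print(apply_wrapping([{"customer": 1}], "first"))    # {'customer': 1}
print(apply_wrapping({"customer": 1}, "wrapped"))    # [{'customer': 1}]
```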
Save response AS payload
Set this option to on to save the response from the completed operation as a payload for subsequent processing.
Applicable request methods: POST, PUT, PATCH, DELETE.
Save response IN payload
Set this option to on to save the response from the completed operation IN the payload for subsequent processing.
This option provides the ability to access the response body via payload variables. This can be useful for cases where an API returns a successful response despite an error - by inspecting response information from the payload itself, you can determine whether or not a request is successful.
By default, the response is saved in a field named response - for example:
However, when the save response in payload option is toggled on, you can specify your preferred field name - for example:
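As a hypothetical illustration of the merge (the field names and values here are invented for this example):

```python
# Hypothetical illustration of saving a response inside the payload
payload = {"id": 123}
response_body = {"id": 123, "status": "created"}

field_name = "response"  # the default; configurable once the option is on
payload[field_name] = response_body

print(payload)
# {'id': 123, 'response': {'id': 123, 'status': 'created'}}
```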
Applicable request methods: GET, POST, PUT, PATCH, DELETE.
Expect an empty response
Set this option to on if you are happy for the process flow to continue if no response is received from this request.
Applicable request methods: POST, GET.
{
  "products": [
    {
      "id": 4001,
      "sku": "BAG-001",
      "colour": "Blue",
      "category": "Bags",
      "quantity": 11
    },
    {
      "id": 4002,
      "sku": "BAG-002",
      "colour": "Green",
      "category": "Bags",
      "quantity": 11
    },
    {
      "id": 4003,
      "sku": "BAG-003",
      "colour": "Pink",
      "category": "Bags",
      "quantity": 12
    },
    {
      "id": 4004,
      "sku": "BAG-004",
      "colour": "Red",
      "category": "Bags",
      "quantity": 10
    }
  ]
}
{
  "products": [
    {
      "accessories": [
        {
          "id": 4,
          "sku": "BAG-003",
          "colour": "Green",
          "category": "Bags",
          "quantity": 11
        },
        {
          "id": 5,
          "sku": "BAG-004",
          "colour": "Pink",
          "category": "Bags",
          "quantity": 12
        }
      ]
    },
    {
      "footwear": [
        {
          "id": 6,
          "sku": "SHOE-001",
          "colour": "Black",
          "category": "Shoes",
          "quantity": 20
        },
        {
          "id": 7,
          "sku": "SHOE-002",
          "colour": "White",
          "category": "Shoes",
          "quantity": 15
        },
        {
          "id": 8,
          "sku": "BOOT-001",
          "colour": "Brown",
          "category": "Boots",
          "quantity": 10
        }
      ]
    }
  ]
}
The override query for this example is:

INSERT INTO products (id, sku, colour, category, quantity) VALUES (:products.1.footwear.0.id, :products.1.footwear.0.sku, :products.1.footwear.0.colour, :products.1.footwear.0.category, :products.1.footwear.0.quantity)

With save response in payload toggled on, the saved response takes the form:

{"id":123,"response":{"id":123}}
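To see how a dot-path placeholder such as :products.1.footwear.0.id maps onto the nested payload above, here's a minimal Python resolver. This is an illustrative sketch, not the platform's actual resolution logic:

```python
from functools import reduce

# Sketch of resolving a dot-path against a nested payload (illustrative only)
payload = {
    "products": [
        {"accessories": [{"id": 4}, {"id": 5}]},
        {"footwear": [{"id": 6, "sku": "SHOE-001"}, {"id": 7, "sku": "SHOE-002"}]},
    ]
}

def resolve(path, data):
    # walk the path one segment at a time; numeric segments index into arrays
    def step(node, key):
        return node[int(key)] if isinstance(node, list) else node[key]
    return reduce(step, path.split("."), data)

print(resolve("products.1.footwear.0.id", payload))   # 6
print(resolve("products.1.footwear.0.sku", payload))  # SHOE-001
```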
Response handling
Save response as payload
Set this option to on to save the response from the completed operation as a payload for subsequent processing.
Save response in payload
Set this option to on to save the response from the completed operation IN the payload for subsequent processing.
This option saves the response from your database, together with the payload that was sent in.
By default, the response is saved in a field named response however, when the save response in payload option is toggled on, you can specify your preferred field name.
The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields, please see our SFTP connector page - details are the same.
The get_and_move command retrieves the file content, loads it into the flow as a payload, and then moves the file to the specified directory on the remote server.
You can use the get_and_move command to get and move a single file in one step. In this scenario, the file content is loaded into the flow as a payload.
You can use the get_and_move command to get and move multiple files, but this requires multiple steps rather than a single get_and_move operation. In this scenario, content from files is NOT loaded into the flow; instead, payloads will contain the filename.
Regular expressions are supported when defining files to be retrieved and moved.
When configuring an SFTP connector, three fields should be updated:
When getting and moving all files from a directory, subfolders are not included.
FTP command followed by the target directory - i.e. where should the file(s) be moved?
The common root to source and target directories.
The source directory and file(s).

Our process flow is configured as follows:

In this flow, we need to retrieve content from a file named orders.json which is located in /myfiles/folderA on the remote server:
Then we want to move this file to /myfiles/folderB and keep the same filename.

Connector settings
Our SFTP shape is configured as follows:
Start looking for the file to get from the root, which is defined as: /myfiles/
Check the path for the file to get, which is defined as: folderA/orders.json

Payload & SFTP server
When the process flow is run, the payload for the SFTP shape shows the content retrieved from /myfiles/folderA/orders.json :
On our remote server, the file is gone from /myfiles/folderA/:
And now it can be found in /myfiles/folderB/:

Our process flow is configured as below:

In this flow, we need to get and move all files in /myfiles/folderF which start with 'old', on the remote server:
We are moving these files to /myfiles/folderG , which is currently empty:
We are keeping the same filenames.

Connector settings for SFTP shape 1
Our first SFTP connector step is configured with a list command, as follows:
Use the SFTP GET user pass endpoint.
Use list as the FTP command

Flow control settings
We use a flow control step to extract each file name into its own payload:
Here we create batches of 1, so we get one payload per file name.
When the flow runs, this shape outputs multiple payloads, each containing a single filename. For example:

Filter shape settings
We use a filter shape to extract only the file names that we want to process. To achieve this, we define a single filter rule, as below:
Here we define the field name as 0, so the filter targets the first value in our payload (bearing in mind we only have one field in each payload).
We set the filter type to string and the operator to regex, then provide our regular expression.
Our regular expression is set to /^old.*/i, so only files starting with 'old' will be extracted for onward processing.
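The filter's expression can be tried out in Python, where the /i flag maps to re.IGNORECASE. The filenames below are hypothetical, chosen to mirror the scenario:

```python
import re

# The filter's regular expression /^old.*/i expressed in Python - the /i flag
# maps to re.IGNORECASE, so matching is case-insensitive
pattern = re.compile(r"^old.*", re.IGNORECASE)

# hypothetical filenames, one per incoming payload
filenames = ["old_orders.txt", "OLD_stock.txt", "new_orders.txt", "Oldest.csv"]
kept = [name for name in filenames if pattern.match(name)]

print(kept)  # ['old_orders.txt', 'OLD_stock.txt', 'Oldest.csv']
```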
When the flow runs, this shape outputs three payloads:

Connector settings for SFTP shape 2
Our final SFTP connector step is configured to get and move files with the names received from the previous filter shape:
Since we are updating the remote server (as opposed to retrieving files) the SFTP PUT user pass endpoint is selected.
The root is defined as: /myfiles/

On our remote server, all processed files are removed from /myfiles/folderF/:
And now they can be found in /myfiles/folderG/:
Having loaded content from this file, move it to the path (from the root) which is specified immediately after the ftp command. This is defined as: get_and_move:folderB/{{current_filename}}
Look for files in the root, which is defined as: /myfiles/folderF
Since there's no specific file to target, we leave the path empty
When the flow runs, the SFTP shape outputs a single payload which contains all file names found in /myfiles/folderF, as an array:
Each one contains a single filename. For example:
The path defines which files are retrieved and moved. It's set as folderF/[[payload.0]] which means: look in folderF (from the root) for a filename resolved from the first value in the incoming payload. Keep in mind that this step repeats for each incoming payload from the previous flow control step - i.e. for each file.
The ftp command includes the get_and_move command, immediately followed by our target directory: get_and_move:folderG/{{current_filename}}.
When the flow runs, this shape outputs a payload for each processed file, each one containing the file name. For example:
The pluck command retrieves a file, loads content into the process flow as a payload, and then deletes the file from the remote server.
You can pluck a single file or multiple files.
Regular expressions are supported when defining files to retrieve with the pluck command.
When configuring an SFTP connector, two fields should be updated:
FTP command only: pluck
The root to the file(s) you want to pluck. Since the root only applies to one directory, you can enter the full path here.
The filename(s) to be plucked.

Our process flow is configured as follows:

In this flow, we need to pluck all files in /myfiles/folderB that start with orders - so orders.json, orders2.json, orders3.json:

Connector settings
Our SFTP shape is configured as follows:
Use the SFTP GET user pass endpoint.
Start looking for the file(s) to pluck from the root, which is defined as: /myfiles/folderB/

Payload & SFTP server
When the process flow is run, the payload for the SFTP shape shows the content retrieved from matched files. We have three payloads:
On our remote server, all 'orders' files have been removed from /myfiles/folderB/ and this directory is now empty:
Check the path for the file(s) to pluck, which is defined as: /^orders.*/i. This regular expression resolves to all files starting with 'orders', irrespective of upper/lower case.

The list command should only be used to list all files in a given directory. If you need to work with a specific group of files, you may find the get command useful. When configuring an SFTP connector, two fields should be updated:
FTP command only: list
The root to the directory you want to list. Since there is no specific file to consider, the full path to the directory will be the root.
Not required.
Our process flow is configured as follows:
In this flow, we need to list all files in /myfiles/folderB on the remote server:
Connector settings
Our SFTP shape is configured as follows:
List all files in the root, which is defined as: /myfiles/folderB/
When the flow runs, the names of all files in /myfiles/folderB/ on our remote server are loaded into the flow as a single payload, with filenames in an array:
The rename command changes the name of specified files. You can rename single/multiple files and leave them in the same directory, or you can rename files into a different directory.
No content is loaded into the flow. A boolean value is returned to indicate a successful/failed file operation.
Regular expressions are supported in filenames.
When configuring an SFTP connector, three fields should be updated:
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
When you install the Patchworks SFTP connector from the Patchworks marketplace and then add an instance, you'll find that two authentication methods are available:
User pass
The instance is authenticated by providing a username and password for the SFTP server.
Key pass
The instance is authenticated by providing a private key (RSA .pem format) for the SFTP server.
Further information on these authentication methods can be found on our SFTP (prebuilt connector) page.
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
FTP command
A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command.
Root
This field is only needed if you are specifying a regular expression in the subsequent path field.
If you are NOT defining the path field as a regular expression, the root field isn't important - you can leave it set to /.
If you ARE defining the path field as a regular expression, enter a root path that reflects the expected file location as closely as possible - this will optimise performance for expression matching.
For example, suppose the files that we want to process are in the following SFTP folder:
/orders/store/year/pending and that our specified path contains a regular expression to retrieve all files for store 1 for the current day in 2023. In this case our root would be defined as:
orders/store1/2023/pending
Path
If the name of the file that you want to target is static and known, enter the full path to it here - for example:
store1/2023/pending/20230728.json
When specifying a path to a given folder in this way, you don't need a / at the start or at the end.
If the name is variable and therefore unknown, you can specify a regular expression as the path. In this case, you enter the required regular expression here, and ensure that the root field contains a path to the relevant folder (see above).
Original filename
This field is not currently used. For information about working with original filenames please see the section below.
Original path
This field is not currently used. For information about working with original paths please see the section below.
Please refer to the Valid FTP commands page.
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path and you want to replicate the source path in the target location.
The {{current_filename}} variable is used to reference the filename within the current SFTP connection step. It's particularly useful when moving files - for example:
get_and_move:store1/completed_orders/{{current_filename}}
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The following four lines of code should be added to your script:
The path in your SFTP connection shape should be set to: {{original_filename}}
The data object in the script shape contains three items: payload, meta, and variables.
Our script code creates a timestamp, puts it into the meta, and then puts the meta into the data.
The SFTP shape always checks if there is an original_filename key in the meta and if one exists, this is used.
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
FTP command followed by the required directory and name for the file when it's renamed.
The common root to source and target directories.
The source directory and file(s) to be renamed.

Our process flow is configured as below:

In this flow, we need to rename a file named hello.txt in /myfiles/folderC to goodbye.txt:
We want the renamed file to end up in the same directory.

Connector settings
Our SFTP shape is configured as follows:
Use the SFTP GET user pass endpoint.
Start looking for the file to rename from the root, which is defined as: /myfiles/

Payload & SFTP server
When the process flow is run, the payload for our SFTP shape shows just a 1:
A 1 indicates a successful response from your remote server. A 0 indicates an unsuccessful response.
On our remote server, hello.txt is now named goodbye.txt (still in /folderC):

Our process flow is configured as follows:

In this flow, we need to rename /myfiles/folderE/hello.txt:
The file will be renamed as /myfiles/folderF/goodbye.txt - so we are both moving and renaming the file:

Connector settings for SFTP shape
Our first SFTP connector step is configured as follows:
Use the SFTP GET user pass endpoint.
Start looking for the file to rename from the root, which is defined as: /myfiles/

Payload for SFTP shape
When the process flow is run, the payload for our SFTP shape shows just a 1:
A 1 indicates a successful response from your remote server. A 0 indicates an unsuccessful response.

Remote server outcome
On our remote server, hello.txt is gone from /myfiles/folderE:
But now we have a file named, goodbye.txt in /myfiles/folderF:
$timestamp = round(microtime(true) * 100); // current time, in hundredths of a second
$meta = $data['meta'];
$meta['original_filename'] = 'FIXED_TEXT' . '_' . $timestamp . '.xml';
$data['meta'] = $meta;

Check the path for the file to rename, which is defined as: folderC/hello.txt
Rename the file to the directory/name provided immediately after the ftp command. This is defined as: rename:folderC/goodbye.txt
Check the path for the file to rename, which is defined as: folderE/hello.txt
Rename the file to the directory/name provided immediately after the ftp command. This is defined as: rename:folderF/goodbye.txt
Suppose our SFTP folder is:
store1/orders/2023/pending
...and that it contains a number of files which all start with 'orders' followed by a date:
Our path and root fields would be defined as below:
Here, the root field is set to the folder containing our required files and the path field contains a regular expression where:
the / at the start of the string denotes the start of the regular expression
the ^ means that the first part of the line (i.e. filename) must start with the literal string orders
[0-9]* matches any sequence of digits
\. is an escaped full stop, so a literal full stop is used instead of the special meaning that a full stop has in regular expressions
json is a literal string to be matched
the / at the end of the string denotes the end of the regular expression
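Assembling those parts gives a pattern along the lines of ^orders[0-9]*\.json (a reconstruction from the breakdown above, since the exact expression isn't shown here). It can be tried out in Python:

```python
import re

# Pattern reconstructed from the breakdown: starts with the literal 'orders',
# followed by any sequence of digits, then a literal '.json'
pattern = re.compile(r"^orders[0-9]*\.json")

files = [
    "orders20230728.json",
    "orders20230729.json",
    "invoices20230728.json",
    "orders20230728.csv",
]
matched = [name for name in files if pattern.match(name)]

print(matched)  # ['orders20230728.json', 'orders20230729.json']
```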
When configuring an SFTP connector, three fields should be updated:
FTP command only: append
The root to the directory where the source file is located. You could enter just / and then a full path in the path field, or you can provide the full path as the root.
The source file (i.e. the file which contains content you want to append to the target file).
When configuring an SFTP connector for the delete command, three fields should be updated:
FTP command only: delete
The root to the directory where the required file is located. You could enter just / and then a full path in the path field, or you can provide the full path to the file.
If you are deleting multiple files using a regular expression, enter the entire path here, so the path field only contains your regular expression.
The filename or regular expression (if you specified the full path as root) or the full path to the file.
Our process flow is configured as follows:
In this flow, we have three SFTP shapes, each with a different task:
SFTP shape 1
Retrieve a list of fruit from a summer.txt file in /myfiles/folderA
SFTP shape 2
Append this content to another list of fruit in a file named winter.txt, which is also stored in /myfiles/folderA:
SFTP shape 3
Get the full list of fruit from winter.txt and load it into the flow.
Connector settings for SFTP shape 1
Our first SFTP shape is configured to retrieve content to append, from /myfiles/folderA/summer.txt . Connector settings are defined using a get command, as follows:
Payload from SFTP shape 1
When the process flow is run, the payload for our first SFTP shape shows the content in /myfiles/folderA/summer.txt :
Connector settings for SFTP shape 2
Our second SFTP shape is configured to append the current payload to our target file on the remote server - /myfiles/folderA/winter.txt . Connector settings are defined as follows:
Payload from SFTP shape 2
When the process flow is run, the payload for our second SFTP shape shows just a 1:
A 1 indicates a successful response from your remote server. A 0 indicates an unsuccessful response from your remote server.
Connector settings for SFTP shape 3
Our final SFTP shape is configured to retrieve our target file from the remote server - /myfiles/folderA/winter.txt . Connector settings are defined using a get command, as follows:
Payload from SFTP shape 3
When the process flow is run, the payload for our final SFTP shape shows the content in /myfiles/folderA/winter.txt :
Here we can see that content from /myfiles/folderA/summer.txt has been appended to the original content in /myfiles/folderA/winter.txt.
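The behaviour of these three shapes can be sketched in Python, using local files to stand in for the remote server (paths and file contents are illustrative):

```python
import os
import tempfile

# Local directory standing in for /myfiles/folderA on the remote server
root = tempfile.mkdtemp()
with open(os.path.join(root, "summer.txt"), "w") as f:
    f.write("peach\nmelon\n")   # illustrative summer list
with open(os.path.join(root, "winter.txt"), "w") as f:
    f.write("apple\npear\n")    # illustrative winter list

# SFTP shape 1: get summer.txt and load its content as the payload
with open(os.path.join(root, "summer.txt")) as f:
    payload = f.read()

# SFTP shape 2: append the current payload to winter.txt
with open(os.path.join(root, "winter.txt"), "a") as f:
    f.write(payload)

# SFTP shape 3: get winter.txt and load the combined content
with open(os.path.join(root, "winter.txt")) as f:
    payload = f.read()

print(payload)  # apple, pear, then the appended peach and melon
```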
Our process flow is configured as follows:
In this flow, we need to delete all files that start with 'old' in /myfiles/folderB on the remote server:
Connector settings
Our SFTP shape is configured as follows:
Look for files from the root, which is defined as: /myfiles/folderB
Check the path for the file(s) to remove, defined as: /^old.*/i
Delete files with the ftp command. This is defined as: delete
On our remote server, all files starting with 'old' have been removed from /myfiles/folderB/:
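The pattern /^old.*/i is a case-insensitive regular expression anchored to the start of the filename. A quick Python sketch shows which names it would select (filenames are illustrative):

```python
import re

# Python equivalent of the /^old.*/i pattern used in the path field
pattern = re.compile(r"^old.*", re.IGNORECASE)

files = ["old_orders.csv", "OLD_stock.json", "holds.txt", "new_orders.csv"]
to_delete = [name for name in files if pattern.match(name)]

print(to_delete)  # ['old_orders.csv', 'OLD_stock.json']
```

Note that holds.txt is not matched, because ^ anchors the pattern to the start of the name.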
The copy command copies a file from one location on the remote server to another location on the remote server. The copied file remains in the source directory and no content is loaded into the flow.
The copy command can be used to copy (and, if a different target filename is given, rename) a file in one step. You can move a file, but this requires multiple steps rather than a single copy operation.
Regular expressions are supported when defining files to be copied with the copy command.
When copying multiple files, subfolders are not included.
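In other words, only files at the top level of the source directory are copied. A rough Python sketch of that behaviour, using local directories to stand in for the remote server:

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()  # stands in for the source directory
dst = tempfile.mkdtemp()  # stands in for the target directory

# Source contains two top-level files and a subfolder with its own file
open(os.path.join(src, "a.txt"), "w").close()
open(os.path.join(src, "b.txt"), "w").close()
os.makedirs(os.path.join(src, "sub"))
open(os.path.join(src, "sub", "c.txt"), "w").close()

# Copy top-level files only; subfolders (and their contents) are skipped
for name in os.listdir(src):
    path = os.path.join(src, name)
    if os.path.isfile(path):
        shutil.copy(path, os.path.join(dst, name))

print(sorted(os.listdir(dst)))  # ['a.txt', 'b.txt']
```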
When copying files, three fields should be updated:
Typically, a process flow run is triggered and a request for data is made via a connector shape - if the request is successful, data is retrieved and the flow continues.
However, there may be scenarios where you need to control whether the connector shape or process flow run should fail or continue based on information returned from the connection request. To achieve this, you can apply a response script to your connector shape.
When a response script is applied to a connector shape, the script runs every time a connection is attempted. The script receives the response code, headers, and body from the request and - utilising response_code actions - returns a value determining whether the connector shape/flow run continues or stops.
Response scripts are just like any other custom script, except they receive additional information from the request - see lines 11 to 14 in the example below:
When a response script is applied, the existing schema/data path defined for the associated endpoint is bypassed. If data is modified by the script, it is returned in its modified state. If the script does not modify data, the data is returned in its original format. You should consider this in any subsequent shapes where the schema is used.
If you use a response script on an endpoint that modifies data and you are reliant on that data to resolve variables (e.g. for pagination) you should ensure that such dependencies are not compromised by your modifications.
To implement a response script, you should:
Response scripts are written and deployed in the usual way, via the custom scripts option. However, two additional options can be used for scripts that you intend to use as response scripts: response_code and message.
The response_code determines how the process flow behaves if a connection request fails. Supported response_code values are:
The message is optional. If supplied, it is output in the run logs.
To apply your response script, access settings for the required connector shape and select your script from the response script dropdown field.
Here we handle the scenario where a connection response appears OK because the status code received is 200, but in fact the response body includes a string (Invalid session) which contradicts this. So, when this string is found in the response body, we want to retry the process flow.
In this case, we return a response_code of 2 with a message of Invalid session:
Here we show how the payload received from a connection request is checked for an order number and an order status - retrying the process flow if a particular order status is found:
0: Continue.
1: Fail the connector step and retry. The connector step is marked as failed and the queue will attempt it again.
2: Fail the process flow and queue it to retry. The process flow is marked as failed and queued for a retry.
3: Fail the process flow and do not retry.
4: Force the connector to re-authenticate and retry the request.
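Treating these codes as a lookup can help when reasoning about script behaviour. A minimal Python sketch (descriptions paraphrase the table above):

```python
# response_code values returned by a response script, per the table above
RESPONSE_CODES = {
    0: "Continue",
    1: "Fail the connector step and retry it",
    2: "Fail the process flow and queue it to retry",
    3: "Fail the process flow and do not retry",
    4: "Force the connector to re-authenticate and retry the request",
}

def describe(code):
    """Return a human-readable description of a response_code."""
    return RESPONSE_CODES.get(code, "Unknown response_code")

print(describe(2))  # Fail the process flow and queue it to retry
```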

<?php
/**
* Handler function.
*
* @param array $data [
* 'payload' => (string|null) the payload as a string|null
* 'variables' => (array[string]string) any variables as key/value
* 'meta' => (array[string]string) any meta as key/value
* 'flow' => (array[mixed]) current flow data, including variables
* 'response' => [
* 'headers' => ['Content-Type' => 'application/json', .......],
* 'body' => '....',
* 'status' => 200
* ]
* ]
*
* @return array $data Structure as above, plus 'logs' => (array[string]) Logs to be written to flow run log after script execution
*/
function handle($data)
{
return $data;
}

<?php
function handle($data)
{
// Stops the flow with a message
$data['response_code'] = 3;
$data['message'] = 'Flow stopped by response script';
return $data;
}

<?php
/**
* Handler function.
*
* @param array $data [
* 'payload' => (string|null) the payload as a string|null
* 'variables' => (array[string]string) any variables as key/value
* 'meta' => (array[string]string) any meta as key/value
* 'response' => [
* 'headers' => ['Content-Type' => 'application/json', .......],
* 'body' => '....',
* 'status' => 200
* ]
* ]
*/
function handle($data)
{
    // Handle an invalid session error in PVX.
    // A failing response may have a status of 200 but a body containing 'Invalid session'.
    if (str_contains($data['response']['body'], 'Invalid session')) {
        return [
            'response_code' => 2, // retry flow
            'message' => 'Invalid session'
        ];
    }
    return $data;
}
<?php
/**
* Handler function.
*
* @param array $data [
* 'payload' => (string|null) the payload as a string|null
* 'variables' => (array[string]string) any variables as key/value
* 'meta' => (array[string]string) any meta as key/value
* 'response' => [
* 'headers' => ['Content-Type' => 'application/json', .......],
* 'body' => '....',
* 'status' => 200
* ]
* ]
*/
function handle($data)
{
    // Check if the order is ready to be processed yet.
    // An incoming payload might look like:
    // ['order_id' => 1, 'status' => 'Pending']
    if ($data['payload']['status'] === 'Pending') {
        return [
            'response_code' => 2, // retry flow
            'message' => 'Order not ready for processing, adding flow back to queue'
        ];
    }
    return $data;
}

FTP command followed by the target directory - i.e. where should the file(s) be copied?
The common root to source and target directories.
The source directory and files

Our process flow is configured as follows:

In this flow, we need to copy a file named orders.json from /myfiles/folderB on the remote server to /myfiles/folderA, keeping the same filename:

Connector settings
Our SFTP shape is configured as follows:
Start looking for the file to get, from the root, which is defined as: /myfiles/
Check the path for the file to copy, which is defined as: folderB/orders.json

On our remote server, the file remains in /myfiles/folderB/:
And it can also be found in /myfiles/folderA/:

Our process flow is configured as below:

In this flow, we need to copy all files in /myfiles/folderC on the remote server:
...to /myfiles/folderD, which is currently empty:
We are keeping the same filenames.

Connector settings for SFTP shape 1
Our first SFTP connector step is configured as follows:
Use the SFTP GET user pass endpoint.
Use list as the FTP command.

Flow control settings
We use a flow control step to extract each file name into its own payload:
Here we create batches of 1, so we get one payload per file name.
When the flow runs, this shape outputs multiple payloads, each containing a single filename. For example:
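This batching behaviour is equivalent to splitting the filename array into batches of one, as in this Python sketch (filenames are illustrative):

```python
def batch(items, size=1):
    """Split a list into consecutive batches of the given size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Array of filenames produced by the list command in SFTP shape 1
filenames = ["orders.json", "stock.csv", "returns.xml"]

payloads = batch(filenames, size=1)
print(payloads)  # [['orders.json'], ['stock.csv'], ['returns.xml']]
```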

Connector settings for SFTP shape 2
Our final SFTP connector step is configured as follows:
Since we are updating the remote server (as opposed to retrieving files) the SFTP PUT user pass endpoint is selected.
The root is defined as /myfiles/

On our remote server, all files remain in /myfiles/folderC/:
And they can also be found in /myfiles/folderD/:

Copy the file to the path (from the root) specified immediately after the ftp command. For the single-file copy example, this is defined as: copy:folderA/{{current_filename}}
Look for files in the root, which is defined as: /myfiles/folderC
Since there's no specific file to target, we leave the path empty
When the flow runs, the SFTP shape outputs a single payload which contains all file names found in /myfiles/folderC, as an array:
The path defines which files are copied. It's set as folderC/[[payload.0]], which means: look in folderC (from the root) for a filename resolved from the first value in the incoming payload. Keep in mind that this step repeats for each incoming payload from the previous flow control step - i.e. for each file.
The ftp command includes the copy command, immediately followed by our target directory: copy:folderD/{{current_filename}}.
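For one incoming payload, the placeholders resolve along these lines (a sketch; orders.json is an illustrative filename):

```python
# One incoming payload from the flow control step: a single-element array
payload = ["orders.json"]
current_filename = payload[0]

# [[payload.0]] resolves to the first value in the incoming payload
path = "folderC/" + payload[0]

# {{current_filename}} resolves to the filename being processed
command = "copy:folderD/" + current_filename

print(path)     # folderC/orders.json
print(command)  # copy:folderD/orders.json
```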
The get command retrieves file content and loads it into the flow as a payload. If multiple files are returned, multiple payloads are generated.
Regular expressions are supported when defining files to retrieve with the get command.
When listing files, two fields should be updated:
FTP command only: list
The root to the directory you want to list. Since there is no specific file to consider, the full path to the directory will be the root.
Not required.

Our process flow is configured as follows:

In this flow, we need to list all files in /myfiles/folderB on the remote server:

Connector settings
Our SFTP shape is configured as follows:
List all files in the root, which is defined as: /myfiles/folderB/

The names of all files in /myfiles/folderB/ on our remote server are loaded into the flow as a payload, with the filenames in an array:
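The resulting payload is simply an array of filenames; in Python terms, something like this (a local directory stands in for /myfiles/folderB, and the filenames are illustrative):

```python
import json
import os
import tempfile

# Local directory standing in for /myfiles/folderB on the remote server
root = tempfile.mkdtemp()
for name in ["orders.json", "stock.csv"]:
    open(os.path.join(root, name), "w").close()

# The list command loads the filenames into the flow as an array payload
payload = sorted(os.listdir(root))
print(json.dumps(payload))  # ["orders.json", "stock.csv"]
```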