The Patchworks FTP connector is used to work with data via files on FTP servers in process flows. You might work purely in the FTP environment (for example, copying/moving files between locations), or you might sync data from FTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an FTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different FTP server location
This guide explains the basics of configuring a connection shape with an FTP connector.
When you add a connection shape and select an FTP connector, you will see that two endpoints are available:
Here:
FTP GET is used to retrieve files from the given server (i.e. to receive data)
FTP PUT is used to add/update files on the given server (i.e. to send data)
Having selected either of the two FTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
For information about these fields please see our Configuring SFTP connections page - details are the same.
The connector shape is used to define which connector instance should be used for sending or receiving data, and then which endpoint.
All connectors have associated endpoints which determine what entity (orders, products, customers, etc.) is being targeted.
Any connector instances that have been added for your installed connectors are available to associate with a connector shape. Any endpoints configured for the underlying connector will be available for selection once you've confirmed which instance you're using.
If you need more information about the relationship between connectors and instances, please see our Connectors & instances page.
When you add a connector shape to a process flow, the connector settings panel is displayed immediately, so you can choose which of your connector instances to use, and which endpoint.
To view/update the settings for an existing connector shape, click the associated 'cog' icon to access the settings panel - for example:
Follow the steps below to configure a connector shape.
Step 1 Click the select a source integration field and choose the instance that you want to use - for example:
All connector instances configured for your company are available for selection. Connectors and their associated instances are added via the manage connectors page.
Step 2 Select the endpoint that you want to use - for example:
All endpoints associated with the parent connector for this instance are available for selection.
Step 3 Depending on how your selected endpoint is configured, you may be required to provide values for one or more variables.
Step 4 Save your changes.
Step 5 Once your selected instance and endpoint settings are saved, go back to edit settings:
Now you can access any optional filter options that are available - for example:
Available filters and variables - and whether or not they are mandatory - will vary, depending on how the connector is configured.
Step 6 The request timeout setting allows you to override the default number of seconds allowed before a request to this endpoint is deemed to have failed - for example:
The default setting is taken from the underlying connector endpoint setup and should only be changed if you have a technical reason for doing so, or if you receive a timeout error when processing a particularly large payload.
Step 7 Set error-handling options as required. Available options are summarised below:
Retries
Sets the number of retries that will be attempted if a connection can't be made. You can define a value between 0 and 2. The default setting is 1.
Backoff
If you're experiencing connection issues due to rate limiting, it can be useful to increase the backoff value. This sets a delay (in seconds) before a retry is attempted. You can define a value between 1 and 10 seconds. The default setting is 1.
Allow unsuccessful statuses
If you want the process flow to continue even if the connection response is unsuccessful, toggle this option on. The default setting is off.
Step 8 Set the payload wrapping option as appropriate for the data received from the previous step:
This setting determines how the payload that gets pushed should be handled. Available options are summarised below:
Raw
Push the payload exactly as it is pulled - no modifications are made.
First
This setting handles cases where your destination system won't process array objects, but your source system sends everything (even single records) as an array. So, [{customer_1}] is pushed as {customer_1}.
When multiple records are pulled, they are written to the payload as an array. If you then strip out a single record to be pushed, that single record will - typically - still be wrapped in an array. Most systems will not accept single records as an array, so we need to 'unwrap' our customer record before it gets pushed.
Wrapped
This setting handles cases where your destination system is expecting a payload to be wrapped in an array, but your payload contains a series of 'unwrapped' objects.
The most likely scenario for this is where you have a complex process flow which is assembling a payload from different routes.
Setting payload wrapping to wrapped will wrap the entire payload as an array object. So, {customer_1},{customer_2},{customer_3} is pushed as [{customer_1},{customer_2},{customer_3}].
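As a simple illustration, using a made-up customer record:
A pulled payload of [{"id": 1001, "name": "Jo"}] is pushed as [{"id": 1001, "name": "Jo"}] with raw, and as {"id": 1001, "name": "Jo"} with first.
A pulled payload of {"id": 1001},{"id": 1002} is pushed as [{"id": 1001},{"id": 1002}] with wrapped.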
Step 9 If required you can set response handling options:
These options are summarised below:
Save response AS payload
Set this option to on to save the response from the completed operation as a payload for subsequent processing. Applicable methods: POST, PUT, PATCH, DELETE.
Save response IN payload
Set this option to on to save the response from the completed operation IN the payload for subsequent processing. Applicable methods: GET, POST, PUT, PATCH, DELETE.
Expect an empty response
Set this option to on if you are happy for the process flow to continue if no response is received from this request. Applicable methods: POST, GET.
Step 10 Save your changes.
The Patchworks SFTP connector is used to work with data via files on SFTP servers in process flows. You might work purely in the SFTP environment (for example, copying/moving files between locations), or you might sync data from SFTP files into other connectors, or you might use a combination of both! For example, a process flow might be designed to:
Pull files from an SFTP server
Use the data in those files as the payload for subsequent processing (e.g. sync to Shopify)
Move files to a different SFTP server location
This guide explains the basics of configuring a connection shape with an SFTP connector.
Guidance on this page is written for SFTP connections; however, it also applies to FTP.
When you install the Patchworks SFTP connector and then add an instance, you'll find that two authentication methods are available:
When you add a connection shape and select an SFTP connector, you will see that two endpoints are available:
Here:
SFTP GET UserPass is used to retrieve files from the given server (i.e. to receive data)
SFTP PUT UserPass is used to add/update files on the given server (i.e. to send data)
You may notice that the PUT UserPass endpoint has a GET HTTP method - that's because it's not actually used for SFTP. All we're actually doing here is retrieving host information from the connector instance - you'll set the FTP action later in the endpoint configuration, via the ftp command setting.
Having selected either of the two SFTP endpoints, configuration options are displayed. The same options are used for both endpoints but in a different sequence, reflecting usage:
These fields are summarised below:
In this scenario, we can't know the literal name of the file(s) that the SFTP PUT UserPass endpoint will receive. So, by setting the path field to {{original_filename}}, we can refer back to the filename(s) from the previous SFTP connection step.
The {{original_path}} variable is used to replicate the path from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
The {{current_path}} variable is used to reference the filename within the current SFTP connection step.
For example, you might want to move existing files to a different SFTP folder. The rename FTP command is an efficient way to do this - for example:
Here, we're using the FTP rename command to effectively move files - we're renaming with a different folder location, with current filenames:
rename:store1/completed_orders/{{current_filename}}
The following four lines of code should be added to your script:
Our example is PHP - you should change as needed for your preferred language.
The path in your SFTP connection shape should be set to:
Much of the information above focuses on scenarios where you are working with files between different SFTP locations. However, another approach is to take the data in files from an SFTP server and sync that data into another Patchworks connector.
When a process flow includes a source connection for an SFTP server (using the SFTP GET UserPass endpoint) and a non-SFTP target connector (for example, Shopify), data in the retrieved file(s) is used as the incoming payload for the target connector.
If multiple files are retrieved from the SFTP server (because the required path in settings for the SFTP connector is defined as a regular expression which matches more than one file), then each matched file is put through subsequent steps in the process flow one at a time, in turn. So, if you retrieve five files from the source SFTP connection, the process flow will run five times.
For information about working with regular expressions, please see the link below:
Generally, if your process flow is pulling from a source connection but later pushing just a single record into a destination connection, you should set payload wrapping to first.
This option provides the ability to access the response body via the payload. This can be useful for cases where an API returns a successful response despite an error - by inspecting response information from the payload itself, you can determine whether or not a request is successful.
By default, the response is saved in a field named response - for example:
However, when the save response in payload option is toggled on, you can specify your preferred field name - for example:
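The exact structure depends on your connector and data format, but as a rough, assumed illustration, a pushed order payload with the response saved under the default response field might look like this:
{
  "order_number": 1001,
  "response": {
    "status": 200,
    "body": "..."
  }
}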
Further information on these authentication methods can be found on our page.
If you're processing files between SFTP server locations, the {{original_filename}} variable is used to reference filenames from a previous SFTP connection step. It's most typically used with the SFTP PUT UserPass endpoint.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint and retrieve files matching a regular expression path.
This handles cases where you're taking action with files/data processed by a previous connection shape which is configured to use the SFTP GET UserPass endpoint to retrieve files matching a regular expression path, and you want to replicate the source path in the target location.
A fairly common requirement is to create folders on an SFTP server which are named according to the current date. This can be achieved using a custom script, as summarised below.
The data object passed into the script contains three items: payload, meta, and variables.
Our script creates a timestamp, puts it into the meta, and then puts the meta into the data.
The SFTP shape always checks if there is an original_filename key in the meta and, if one exists, this is used.
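As a rough sketch of this approach in PHP (the stdin/stdout JSON convention shown here is an assumption for illustration - adjust it to match how your custom scripts actually receive and return the data object):
<?php
// Read the incoming data object (payload, meta, variables) - assumed JSON-on-stdin convention.
$data = json_decode(file_get_contents('php://stdin'), true);
// Build a date-based name, e.g. 2023-07-28, and store it in the meta.
$data['meta']['original_filename'] = date('Y-m-d');
// Return the modified data object so the SFTP shape can pick up original_filename.
echo json_encode($data);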
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a pending folder:
The regular expression is explained below:
The sample process flow below shows two connection shapes that are configured with SFTP endpoints - the first is to get files and the second is to put files:
Our aim is to copy files retrieved from an FTP location in the first connection step, to a second FTP location, using the same folder structure as the source.
If we look at settings for the first SFTP connection, we can see that it's configured to get files matching a regular expression, in a store1 folder:
The path is added as a regular expression, explained below:
FTP command
A valid FTP command is expected at the start of this field (e.g. get, put, rename, etc.). If required, qualifying path/filename information can follow a given command. For example:
rename:/orders/store1/processed/{{current_filename}}
Root
This field is only needed if you are specifying a regular expression in the subsequent path field.
If you are NOT defining the path field as a regular expression, the root field isn't important - you can leave it set to /.
If you ARE defining the path field as a regular expression, enter a root path that reflects the expected file location as closely as possible - this will optimise performance for expression matching.
For example, suppose the files that we want to process are in the following SFTP folder:
/orders/store/year/pending
and that our specified path contains a regular expression to retrieve all files for store 1 for the current day in 2023. In this case our root would be defined as:
orders/store1/2023/pending
In this way, any regular expression to match for the path will start in the relevant (2023) folder - rather than checking folders and subfolders for all stores and all years.
Path
If the name of the file that you want to target is static and known, enter the full path to it here - for example:
store1/2023/pending/20230728.json
If the name is variable and therefore unknown, you can specify a regular expression as the path. In this case, you enter the required regular expression here, and ensure that the root field contains a path to the relevant folder (see above).
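For instance, a hypothetical pattern (shown for illustration only, not taken from an actual configuration) such as 202307\d{2}\.json would match any date-named file added during July 2023 - for example, 20230728.json.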
Original filename
This field is not currently used. For information about working with original filenames please see the Using an {{original_filename}} variable section below.
Original path
This field is not currently used. For information about working with original paths please see the Using an {{original_path}} variable section below.
User pass
The instance is authenticated by providing a username and password for the SFTP server.
Key pass
The instance is authenticated by providing a private key (RSA .pem format) for the SFTP server.
Response scripts are just like any other custom script, except they receive additional information from the request - see lines 11 to 14 in the example below:
To implement a response script, you should:
Response scripts are written and deployed in the usual way, via the custom scripts option. However, two additional options can be used for scripts that you intend to apply via connector shapes: response_code and message.
These options are only valid when the script is applied to a connector step as a response script.
The response_code determines how the process flow behaves if a connection request fails. Supported response_code values are:
0 - Continue.
1 - Fail the connector step and retry. The connector step is marked as failed and the queue will attempt it again.
2 - Fail the process flow and queue it to retry. The process flow is marked as failed and queued for a retry.
3 - Fail the process flow and do not retry.
The message is optional. If supplied, it is output in the run logs.
To apply your response script, access settings for the required connector shape and select your script from the response script dropdown field.
Here we handle the scenario where a connection response appears OK because the status code received is 200, but in fact the response body includes a string (Invalid session) which contradicts this. So, when this string is found in the response body, we want to retry the process flow.
In this case we return a response_code of 2 with a message of Invalid session:
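A rough sketch of this check in PHP is shown below. The way the response body is exposed to the script and the way response_code and message are returned are assumptions made for illustration - check the custom scripts documentation for the exact convention used by response scripts.
<?php
// Read the incoming data - assumed to include the connection response body under a 'body' key.
$data = json_decode(file_get_contents('php://stdin'), true);
$body = $data['body'] ?? '';

// The status was 200, but the body tells us the request actually failed.
if (strpos($body, 'Invalid session') !== false) {
    $data['response_code'] = 2;              // fail the process flow and queue it to retry
    $data['message'] = 'Invalid session';    // output in the run logs
}

echo json_encode($data);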
Here we show how the payload received from a connection request is checked for an order number and an order status - retrying the process flow if a particular order status is found:
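Along the same lines, a sketch of that check might look like this (the payload shape and the order_number and status field names are made up for illustration):
<?php
// Read the incoming data - assumed to include the connection payload.
$data = json_decode(file_get_contents('php://stdin'), true);
$payload = $data['payload'] ?? [];
if (is_string($payload)) {
    $payload = json_decode($payload, true) ?: [];
}

// If the order exists but is still awaiting payment, fail the flow and queue a retry.
if (!empty($payload['order_number']) && ($payload['status'] ?? '') === 'awaiting_payment') {
    $data['response_code'] = 2;
    $data['message'] = 'Order ' . $payload['order_number'] . ' is awaiting payment';
}

echo json_encode($data);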
When specifying a path to a given folder in this way, you don't need a / at the start or at the end.