There are two ways to access your company insights page:
The quickest way to access your company insights page is via the account summary link in the left-hand navigation bar:
Notice that you can also see a quick preview of your CPU and data usage for the current month:
Your company insights page can be accessed in settings - select the settings option (from the bottom of the left-hand navigation bar), then choose company insights:
The company insights page is designed to help you understand how your process flows are performing and how resources are being used. At a glance you can see which process flows are the most resource-intensive, and drill down to performance for the connectors and shapes included within those process flows:
Company insights data is loaded at 1 a.m. every morning. This means that data for the current day will not be fully populated until 1 a.m. the following day.
There are five key areas to consider on the company insights page:
The snapshot panel provides summary metrics for the overall performance of your company's process flows, connectors used in process flows, and shapes used in process flows, for the selected month.
These metrics are summarised below:
The colour of usage gauges changes to reflect how close your usage is to the associated allowance.
The data usage value shown here is the same as displayed in the left-hand navigation bar. If you notice a difference occasionally, this is most likely due to caching - values in the navigation bar are cached but values on the main insights page are not. If this happens, you should work from values shown on the insights page (caching will resolve itself in due course).
You can view insights for process flows, connectors used in process flows, and shapes used in process flows (for the selected month) - click the required selector tab for whichever of these you want to view:
Changing the selection here updates all metrics and analysis for the new data type.
The combination chart shows aggregated CPU usage (line) and payload size (bars) by day, for the selected month:
You can interact with this chart in several ways:
Beneath the combination chart, you'll find a breakdown of items (process flows, connectors, or shapes) included in summary metrics.
When process flow data is selected, you'll see a list of process flows that have been run within the selected month:
For each entry you can see the process flow name, CPU time used, the number of times used (i.e. run), data usage, operations usage, and score. If your overall score is low, this is a great place to pinpoint particular process flows that have a low score and may benefit from a review.
To view more details for an entry, click the associated 'view' icon - here you'll see a breakdown for each version of the process flow that has been run (in the selected month):
When connector data is selected, you'll see a list of connectors that have been used within the selected month:
For each entry you can see the connector name, CPU time used, the number of times used, data usage, operations usage, and score. If your overall score is low, this is a great place to pinpoint particular connectors that have a low score and may benefit from a review.
To view more details for an entry, click the associated 'view' icon - here you'll see a breakdown of the specific connector endpoints and instances that were used (in the selected month):
When shape data is selected, you'll see a list of shapes that have been used in process flows, within the selected month:
For each entry you can see the shape name, CPU time used, the number of times used, data usage, operations usage, and score. If your overall score is low, this is a great place to pinpoint particular shapes that have a low score and may benefit from a review.
For script shapes, click the associated 'view' icon - here you'll see a breakdown for all scripts (and versions) that were used in the selected month:
If you manage multiple linked companies (using our partner features bolt-on), you can choose to view aggregated insights for all your managed companies:
Having selected this option, you'll see a snapshot panel with aggregated totals for all of your linked/managed companies:
If you want to see full company insights for a managed company, you should switch to that company and access company insights in the usual way.
Metric | Summary |
---|---|
Data usage | The aggregated size of all payloads that leave each shape in a process flow - these are known as payloads out. |
Operation usage | The total number of operations completed. For more information about how operations are calculated, please see About operations. |
Score | A measure of how efficient/expensive (in terms of processing) your process flows are, based on the amount of data processed per second - 999 is the highest score. |
Your score is based on all runs for all process flows associated with your company profile. This includes flows that are:
Triggered by a schedule, webhook, or event
Initialised by an API request
Run manually
Enabled or disabled when run
In draft or deployed status when run
If your score is on the low side, it may simply be that your process flows are necessarily complex - items such as scripts, transformations, flow control, and caches will have an impact on your score.
However, if a score is low, it's always worth checking further as there may be places where your process flows can be optimised (for example, does a flow include lots of mapping transformations that could be achieved in a single script?).
If you view insights for shape data, it's sometimes very obvious which shapes are 'expensive' and making the biggest contribution to a lower score. However, if your score is on the low side and you're satisfied that the process flow is built optimally, don't worry too much about the score - it's just there as an indicator.
Please see our Best practice for building process flows section for advice on building efficient process flows.
Item | Summary |
---|---|
1 | Month selector. Choose a year and month to analyse - all subsequent data displayed is for the selected year/month. |
2 | Parent company selector. If you manage multiple linked companies (using our partner features bolt-on), you'll see insights for your own (parent) company profile by default. You can also use this dropdown field to view aggregated insights for all your linked companies. This option is only displayed if the partner features bolt-on is enabled for your company profile. |
3 | Performance snapshot panel. At a glance, view your aggregated resource usage and score. These numbers are for the month and data type currently selected. |
4 | Data selectors. Choose the type of data to be analysed - process flows, connectors, or shapes. |
5 | Combination chart. A visual representation of CPU and data usage by day, for the month and data type currently selected. |
6 | Data breakdown. A breakdown of each process flow, connector, or shape (depending which data type is selected) that's included in aggregated totals for the selected month. |
Hover your cursor over an option to bring the corresponding data into focus:
Click an option to toggle the corresponding data on/off:
Hover your cursor over any data point to view summary information:
Company insights can help you understand how your process flows are performing, and your resource usage.
With the ability to view details for all process flows that run in a given month, and drill down to the performance of individual shapes, this is a powerful tool to help you identify any areas that could be optimised to ensure maximum efficiency.
Introduction
Data usage is calculated by aggregating the size of payloads that leave each shape in a process flow - these are known as payloads out.
In any process flow, data is received and passed from one shape to the next. Different shapes handle their incoming payload(s) in different ways - in some cases, data simply passes through (data in is the same as data out) but in others, data is manipulated and changed.
Understanding how these payloads are aggregated provides a clearer picture of your overall data usage.
A payload is the data generated or processed during the execution of any shape within a flow; when it leaves that shape, we refer to it as the payload out. This can happen via any of the mechanisms below:
Mechanism | Summary |
---|---|
Patchworks API call | Data is received from or sent to an API call. |
Connector shape | Data is received from or sent to a Patchworks connector. |
Other shapes | Data is processed within any shape - for example, by a custom script (script shape or transform function), mapping payloads from one structure to another (map shape), or routing payloads down multiple branches (route shape). |
File transfers | Data moves between systems - for example, CSV files or image files. |
Database queries | Data is fetched from or inserted into a database. |
Triggers | Data is sent/received via an event, webhook or Patchworks API call. |
To calculate data usage, all payload out sizes (from each shape in a process flow) are aggregated - the numbered steps and worked examples later in this section show how this works.
Typically, the size of a payload that goes into a process flow shape is the same as the payload out size - payloads are not modified unless your flow includes actions that are designed to do this.
The table below summarises process flow shapes and their ability to change the size of incoming payloads:
Examples later in this section show how data usage can be affected by different process flow shapes.
| Shape | Change size? | Notes |
|---|---|---|
| | No | The payload out is always the same as the incoming payload. |
| | No | The payload out is always the same as the incoming payload. |
| Connector | Yes | When receiving data, the payload out will reflect the volume of data received from this connection request. When sending data, the payload out will be the same as the incoming payload UNLESS you choose to save the response as the payload. |
| De-dupe | Yes | If set to filter, duplicate records are removed, so the payload out may be smaller than the incoming payload. If set to track only, the payload out is the same as the incoming payload. A de-dupe shape will never increase the size of the payload out. |
| Filter | Yes | If a filter removes data from an incoming payload, the payload out will be smaller than the incoming payload. A filter will never increase the size of the payload out. |
| Flow control | No | The incoming payload is batched into multiple, smaller batches but the aggregate size of the payload out for those batches is always equal to the incoming payload size. |
| Load from cache | Yes | The payload out will reflect the volume of data loaded from the cache. |
| Map | Yes | A like-for-like mapping does not change the payload size, but field transformations can - see the note on mapping transformations below. |
| | No | The payload out is always the same as the incoming payload. |
| Route | Yes | When an incoming payload hits a route shape, your defined conditions are checked and a payload out is created for each defined route. If your route conditions exclude items in the incoming payload from progressing down any routes then the aggregate size of payloads out will be smaller than the incoming payload. |
| Manual payload | Yes | If you configure this shape with a manual payload then the payload out is likely to differ from any incoming payload. If no manual payload is specified then the payload out is always the same as the incoming payload. |
| Script | Yes | A custom script might increase or decrease the size of the payload out. |
| | No | The payload out is always the same as the incoming payload. |
| | Yes | The incoming data is split at a defined data element, so only that element progresses to the next step - i.e. the payload out is likely to be smaller than the incoming payload. |
| Track data | No | The incoming payload simply passes through for tracking - the payload out will always be the same as the incoming payload. A track data shape will never increase the size of the payload out. |
A straightforward like-for-like mapping between two systems will not affect the size of the payload out. However, if field transformations are applied, the size of the payload out may change slightly - for example, prefix, suffix, format date, and script transforms. Typically, any size variations from mapping transformations are small.
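To make this concrete, here's a minimal illustrative sketch (hypothetical field names and prefix value, not Patchworks code) showing why a prefix transform makes the payload out marginally larger than the payload in:

```python
import json

# Hypothetical incoming payload for a map shape.
payload_in = {"sku": "12345", "name": "Blue T-Shirt"}

# Like-for-like mapping copies the data unchanged; a prefix transform adds a few characters.
payload_out = dict(payload_in)
payload_out["sku"] = "SHOP-" + payload_out["sku"]  # illustrative prefix transform

size_in = len(json.dumps(payload_in).encode("utf-8"))
size_out = len(json.dumps(payload_out).encode("utf-8"))
print(size_in, size_out)  # the payload out is a few bytes larger than the payload in
```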
In Patchworks, an operation is counted whenever a request is made to send or receive a payload to/from an endpoint.
Crucially, we're not concerned with the number of items in the payload - we simply count the number of times a process flow requests to send or receive a payload. This might happen in several ways:
Mechanism | Summary |
---|---|
Connector shape | A request is made (successfully or otherwise) to receive data from a specified endpoint. |
Connector shape | A request is made (successfully or otherwise) to send data to a specified endpoint. |
Webhook trigger | A webhook is received (with or without a payload) in the first step of the process flow. |
Event trigger | An event is received (with or without a payload) in the first step of the process flow. |
Patchworks API call | A Patchworks API call is received (with or without a payload) in the first step of the process flow. |
The number of payloads that a process flow sends or receives correlates to the number of operations logged. In the most straightforward case, you might create a process flow that always receives a single, unpaginated payload from one system and then sends a single, unpaginated payload to another system - this would be an operations count of 2.
However, as the complexity of process flows increases, so does the possibility that the number of payloads can increase during a process flow run. The most likely ways that this can happen are:
Paginated data. If you receive paginated data, you receive 1 payload for each page of data - so each page represents 1 receive operation. In short, an initial data pull can result in multiple receive operations. And if you receive multiple pages, it follows that multiple pages continue through the flow - which means (potentially) multiple pages will be sent into your destination system, resulting in multiple send operations.
Flow control. The flow control shape is typically used to batch an incoming payload into multiple, smaller payloads for onward processing. So, even if you start by receiving 1 payload, it's likely that you will be sending multiple payloads at the end of the flow.
Examples at the end of this section show the impact that these scenarios can have on operation counts.
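As a rough, hypothetical sketch of this arithmetic (the helper below and its record/page/batch numbers are illustrative only, not a Patchworks API), operation counts for a simple receive-then-send flow could be estimated like this:

```python
import math

def estimate_operations(total_records: int, page_size: int | None = None, batch_size: int | None = None) -> int:
    """Estimate operations for a simple receive-then-send flow (illustrative only).

    page_size: records per page when receiving (None = one unpaginated payload).
    batch_size: records per payload after a flow control shape (None = no batching).
    """
    receive_ops = math.ceil(total_records / page_size) if page_size else 1
    # Without batching, each received payload is sent on as-is.
    send_ops = math.ceil(total_records / batch_size) if batch_size else receive_ops
    return receive_ops + send_ops

print(estimate_operations(350))  # 2: one receive operation plus one send operation
```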
(1) Identify interactions. Every time a shape in your process flow performs a manipulation, moves data, or sends/receives data between systems, an interaction occurs. This could be an API call, a file upload, or any other data transfer/manipulation.
(2) Measure payload sizes. For each interaction, the size of the payload out is measured in megabytes. Only the actual payload is considered - metadata, headers, and other non-payload data are NOT considered when calculating the payload out size.
(3) Aggregate payload sizes. All payload sizes are aggregated to calculate the total data usage.
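As a minimal sketch of this aggregation (the shape labels and sizes below are illustrative, mirroring the first example that follows), data usage is simply the sum of every shape's payload out size:

```python
# Illustrative payload-out sizes (MB) for each shape in a simple flow.
payloads_out_mb = {
    "connector (receive)": 1.0,
    "map": 1.0,
    "connector (send)": 1.0,
}

total_data_usage_mb = sum(payloads_out_mb.values())
print(f"Total data usage: {total_data_usage_mb} MB")  # 3.0 MB
```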
In the simplest of flows, a connector shape receives a 1MB payload, so the first payload out is 1MB.
The map shape receives this as its incoming payload. There are no field transformations so the data doesn't change - the second payload out is 1MB.
The final connector shape receives this as its incoming payload to be sent into the associated system. We have NOT set the save response as payload option to on, so the payload out is 1MB.
The aggregate total for the payload out size for all three shapes is 3MB.
Here, a connector shape receives a 1MB payload, so the first payload out is 1MB.
The de-dupe shape receives this as its incoming payload and filters out all duplicate records, reducing the payload size. The next payload out is 0.75MB.
The map shape receives this as its incoming payload. There are no field transformations so the data doesn't change - the next payload out is 0.75MB. The flow control shape receives this as its incoming payload and batches it into 10 payloads for onward processing. The next payload out is 10 x 75K.
The second connector receives all 10 payloads to be sent into the associated system. We have NOT set the save response as payload option to on, so the payload out is 10 x 75K. Finally, all 10 payloads pass through the de-dupe shape for tracking only. The payload out is 10 x 75K.
The aggregate total for the payload out size for all shapes is 4.75MB.
Here, a connector shape receives a 1MB payload, so the first payload out is 1MB.
This payload is added to a cache, and the next payload out is 1MB. This payload is passed to a route shape with conditions that send half the payload down one route and half down the other - resulting in 2 x 0.5MB payloads out.
For route 1, the first 0.5MB payload passes through a track data shape and the payload out is 0.5MB. The map shape receives this as its incoming payload and there are no field transformations so the data doesn't change - the payload out is 0.5MB. The final connector receives this as its incoming payload to be sent into the associated system. We have NOT set the save response as payload option to on, so the payload out is 0.5MB.
For route 2, the first 0.5MB payload is received by the map shape - there are no field transformations so the data doesn't change - the payload out is 0.5MB. The final connector receives this as its incoming payload to be sent into the associated system. We have NOT set the save response as payload option to on, so the payload out is 0.5MB.
The aggregate total for the payload out size for all shapes is 5.5MB.
Here, a connector shape receives a 1MB payload, so the first payload out is 1MB.
This payload is added to a cache, and the next payload out is 1MB. Then we load cache data from an existing company cache which is 50MB, so the next payload out is 50MB.
The script shape receives this as its incoming payload and runs - it doesn't do anything that affects the payload size so the next payload out is 50MB.
The map shape receives this as its incoming payload. There are no field transformations so the data doesn't change - the next payload out is 50MB.
The final connector receives this as its incoming payload to be sent into the associated system. We have NOT set the save response as payload option to on, so the payload out is 50MB.
The aggregate total for the payload out size for all shapes is 202MB.
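As a quick check on the arithmetic in this last example (the shape labels are just comments, sizes in MB), summing the payload out sizes per shape gives the same total:

```python
# Payload-out sizes (MB) for each shape in the cache example above.
payloads_out_mb = [
    1,   # connector (receive)
    1,   # add to cache
    50,  # load from cache
    50,  # script
    50,  # map
    50,  # connector (send)
]
print(sum(payloads_out_mb))  # 202 MB
```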
In the simplest of flows you might receive 1 payload from a source endpoint which contains 350 records in a single, unpaginated payload. The flow goes on to send that payload to a destination endpoint.
This results in 2 operations - 1 for the receive operation and 1 for the send operation.
In a slightly more complex flow you might receive 1 payload from a source endpoint which contains 350 records in a single, unpaginated payload.
The flow continues with a shape which batches this data into smaller chunks, resulting in 5 payloads, each containing 70 records. The flow goes on to send all of these payloads to a destination endpoint.
This results in 6 operations - 1 for the receive operation and 5 for the send operations.
In this example our incoming data is paginated as 50 records per page, so we receive 350 records as 7 payloads (50 records in each).
The flow goes on to send all of these payloads to a destination endpoint.
This results in 14 operations - 7 for receive operations and 7 for send operations.
In this example our incoming data is paginated as 50 records per page, so we receive 350 records as 7 payloads (50 records in each).
The flow continues with a shape which batches this data into single-item payloads. The flow goes on to send all of these payloads to a destination endpoint.
This results in 357 operations - 7 for receive operations and 350 for send operations.
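As a quick check on these operation counts (record, page, and batch sizes taken from the examples above; the arithmetic itself is illustrative), each scenario can be reproduced with a couple of lines:

```python
import math

records = 350

# Single unpaginated receive, single send.
print(1 + 1)                              # 2 operations

# Single receive, flow control batches of 70 records.
print(1 + math.ceil(records / 70))        # 6 operations

# Paginated receive (50 per page), pages sent on unchanged.
print(math.ceil(records / 50) * 2)        # 14 operations

# Paginated receive (50 per page), flow control batches of 1 record.
print(math.ceil(records / 50) + records)  # 357 operations
```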