Custom scripting technical overview

Introduction

This page provides some technical background for those interested in how the Patchworks custom scripting feature is implemented.

Need to know

  • The maximum memory size for a custom script is 512MB

  • The maximum size of a custom script is 4GB
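For illustration only, the sketch below shows one way the 512MB memory limit noted above could be expressed, assuming it is enforced as a Kubernetes container resource limit. That enforcement mechanism, and every name used here, is an assumption rather than a detail of the Patchworks implementation.

```python
# Hypothetical sketch: expressing a 512MB script memory cap as a
# Kubernetes container resource limit (Python Kubernetes client).
from kubernetes import client

def script_container(image_url: str) -> client.V1Container:
    """Container spec for a custom script, capped at 512Mi of memory."""
    return client.V1Container(
        name="custom-script",                      # hypothetical container name
        image=image_url,                           # image produced by the builder
        resources=client.V1ResourceRequirements(
            limits={"memory": "512Mi"},            # the documented 512MB cap, expressed as 512Mi
            requests={"memory": "128Mi"},          # assumed request value; not documented
        ),
    )
```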

Kubefaas overview

Patchworks custom scripting code is packaged into containers and runs as serverless functions.

This is achieved using our proprietary Kubefaas (Functions as a Service) project, which provides language-agnostic script deployments: scripts are built from lightweight templates for the chosen language and packaged into Docker containers.

By utilising Docker and Kubernetes, we can provide consistent, repeatable execution of customer scripts, whilst managing replica counts to reduce wasted resources.
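As a rough illustration of this pairing, the sketch below runs an already-built script image as a Kubernetes Deployment whose replica count can then be managed. The function name, labels and namespace are assumptions, not Kubefaas internals.

```python
# Illustrative sketch only: running a packaged script image as a
# Kubernetes Deployment so its replicas can be managed centrally.
from kubernetes import client, config

def deploy_script(name: str, image_url: str, namespace: str = "functions") -> None:
    """Create a Deployment for the script's container image with one replica."""
    config.load_kube_config()                      # use load_incluster_config() in-cluster
    container = client.V1Container(name=name, image=image_url)
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"function": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=1,                            # scaled up or down on demand
            selector=client.V1LabelSelector(match_labels={"function": name}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)
```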

Script creation & execution

The diagrams below provide a simplified illustration of what happens when a custom script is built and executed.

Script invocations

This process runs every time a script is called, ensuring that the script exists and that it has sufficient instances to handle the request:

  • If the script is not deployed, Core contacts the builder to build the script instance.

  • If the script is deployed but does not have sufficient instances, the Kubernetes API is used to scale up the function (see the sketch below).
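The sketch below shows what such a scale-up could look like using the official Kubernetes Python client; the Deployment name and namespace are assumptions, and this is not the actual Kubefaas code.

```python
# Minimal sketch: scale a deployed function up via the Kubernetes API.
from kubernetes import client, config

def ensure_min_replicas(name: str, namespace: str = "functions", minimum: int = 1) -> None:
    """Scale the function's Deployment up to `minimum` replicas if it is below that."""
    config.load_kube_config()                      # use load_incluster_config() in-cluster
    apps = client.AppsV1Api()
    scale = apps.read_namespaced_deployment_scale(name, namespace)
    if (scale.spec.replicas or 0) < minimum:
        scale.spec.replicas = minimum
        apps.patch_namespaced_deployment_scale(name, namespace, scale)
```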

The process is illustrated below:

Here:

  1. Send a request to the Kubefaas Controller (proxy).

  2. (optional) If the script is not deployed, build it.

  3. (optional) Deploy the new script with the result from step 2, then return to step 1.

  4. Check how many instances of the script are running.

  5. (optional) Scale script instances up to 1 and wait for readiness (see the sketch below).
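As an illustration of step 5, the sketch below polls a function's Deployment until at least one replica reports ready before the request is routed to it. The Deployment name, namespace and timeout are assumptions.

```python
# Minimal sketch of step 5: wait for at least one ready replica.
import time
from kubernetes import client, config

def wait_for_ready(name: str, namespace: str = "functions", timeout: float = 60.0) -> None:
    """Poll the Deployment until at least one replica is ready, or time out."""
    config.load_kube_config()                      # use load_incluster_config() in-cluster
    apps = client.AppsV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = apps.read_namespaced_deployment(name, namespace).status
        if (status.ready_replicas or 0) >= 1:
            return                                 # safe to forward the request
        time.sleep(1)                              # brief back-off before checking again
    raise TimeoutError(f"function {name!r} did not become ready within {timeout}s")
```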

Script builds

This process runs when a script is built. Script templates are designed to maximise Docker layer caching and provide fast builds.

Builds from the last 14 days are cached, so repeat builds are near-instant (reducing idle time in flows). The process is illustrated below:

Here:

  1. Check if the function exists in the cache; if it does, respond with the cached image.

  2. Build a new Docker image.

  3. Store the image in an S3 cache (with 14-day retention).

  4. Respond with the image URL.
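To make the flow concrete, the sketch below approximates these four steps with the Docker SDK for Python and boto3: check an S3 cache, build on a miss, store the image tarball and return its URL. The bucket name, key scheme and tarball format are assumptions; the real cache and its 14-day retention may be implemented differently (for example via an S3 lifecycle rule).

```python
# Simplified sketch of the build flow: cache check, build, store, return URL.
import tempfile
import boto3
import docker
from botocore.exceptions import ClientError

BUCKET = "kubefaas-build-cache"                        # hypothetical bucket (14-day retention assumed)

def build_or_fetch(script_id: str, template_dir: str) -> str:
    """Return an S3 URL for the script's image, building it only on a cache miss."""
    s3 = boto3.client("s3")
    key = f"images/{script_id}.tar"                    # hypothetical key scheme
    try:
        s3.head_object(Bucket=BUCKET, Key=key)         # step 1: cache hit?
    except ClientError:
        image, _logs = docker.from_env().images.build( # step 2: build a new image
            path=template_dir, tag=script_id, rm=True)
        with tempfile.NamedTemporaryFile() as tmp:     # step 3: store in the S3 cache
            for chunk in image.save():
                tmp.write(chunk)
            tmp.flush()
            s3.upload_file(tmp.name, BUCKET, key)
    return f"s3://{BUCKET}/{key}"                      # step 4: respond with the URL
```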
