A cutting-edge iPaaS platform requires a robust, versatile infrastructure that scales with its customers. The Patchworks infrastructure is built on Kubernetes, a technology that has revolutionised how we deploy, manage, and scale our applications:
Flexible auto-scaling is a significant advantage for Patchworks users - it means you don't pay for a predetermined capacity that might only be required during peak periods, such as Black Friday.
Our flexible, auto-scaling architecture gives peace of mind by allowing you to start on your preferred plan, with the ability to exceed soft limits as needed. If you require more resources, you can transition to a higher tier seamlessly, or manage overages with ease.
Auto-scaling adjusts computing resources dynamically based on demand - ensuring efficient, cost-effective resource management that's always aligned with real-time load. The auto-scaling process breaks down into four stages.
At Patchworks, every process flow shape has its own microservice and its own Kubernetes pod(s). The diagram below shows how this works:
Metrics for Kubernetes pods are scraped from Horizon using Prometheus. These metrics are queried by KEDA and - when the given threshold is reached - auto-scaling takes place. This process is shown below:
The Prometheus JSON exporter scrapes Horizon metrics, providing a process count for each Core microservice.
Prometheus scrapes metrics from the JSON exporter.
KEDA queries Prometheus, checking if any Core microservice has reached the process threshold (set to 8).
If the process threshold is reached, KEDA scales the Core microservice pod.
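To make the KEDA step concrete, here is a minimal sketch of a ScaledObject with a Prometheus trigger, written as a TypeScript object literal purely for readability (in a cluster this would be a YAML manifest). The deployment name, Prometheus address, and metric query are illustrative assumptions - only the threshold of 8 reflects the description above.

```typescript
// Minimal sketch of a KEDA ScaledObject with a Prometheus trigger.
// Names, the server address, and the query are hypothetical examples,
// not Patchworks' actual configuration.
const coreMicroserviceScaler = {
  apiVersion: "keda.sh/v1alpha1",
  kind: "ScaledObject",
  metadata: { name: "core-microservice-scaler" },
  spec: {
    scaleTargetRef: { name: "core-microservice" }, // the Core microservice deployment to scale
    minReplicaCount: 1,
    maxReplicaCount: 10,
    triggers: [
      {
        type: "prometheus",
        metadata: {
          serverAddress: "http://prometheus.monitoring.svc:9090", // assumed Prometheus service address
          query: 'core_process_count{service="core-microservice"}', // assumed metric from the JSON exporter
          threshold: "8", // the process threshold described above
        },
      },
    ],
  },
};

console.log(JSON.stringify(coreMicroserviceScaler, null, 2));
```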
The Kubernetes cluster auto-scaler monitors pods and decides when a node needs to be added. A node is added if a pod needs to be scheduled and there aren't sufficient resources to fulfil the request. This process is shown below:
The Kubernetes scheduler reads the resource request for a pod and decides if there are enough resources on an existing node. If yes, the pod is assigned to the node. If no, the pod is set to a pending state and cannot start.
The Kubernetes auto-scaler detects that a pod cannot schedule due to a lack of resources.
The Kubernetes auto-scaler adds a new node to the cluster node pool - at which point, the Kubernetes scheduler detects the new node and schedules the pod on the new node.
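For illustration, the sketch below shows the part of a pod spec the scheduler evaluates - the resource requests. If no existing node can satisfy them, the pod stays pending until the cluster auto-scaler adds a node. The image name and request values are hypothetical.

```typescript
// Minimal sketch of a pod spec with resource requests - the figures the
// Kubernetes scheduler checks against available node capacity.
// Image and values are hypothetical examples.
const examplePod = {
  apiVersion: "v1",
  kind: "Pod",
  metadata: { name: "core-microservice-worker" },
  spec: {
    containers: [
      {
        name: "worker",
        image: "registry.example/core-microservice:latest", // hypothetical image
        resources: {
          requests: { cpu: "500m", memory: "512Mi" }, // what a node must have free for scheduling
          limits: { cpu: "1", memory: "1Gi" },        // hard ceiling once the pod is running
        },
      },
    ],
  },
};

console.log(JSON.stringify(examplePod, null, 2));
```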
With customers worldwide relying on Patchworks to sync data between numerous systems, we understand just how vital data security and integrity are throughout our operations. Patchworks is committed to implementing and promoting Information Security best practices at every level of our organisation.
Following rigorous audits by an accredited certification body, we are delighted to see this reflected in our certification for compliance with ISO/IEC 27001:2022.
ISO/IEC 27001:2022 is the most recent update of the international standard for managing information security.
Published by the International Organization for Standardization (ISO), it provides a framework for establishing, implementing, maintaining, and continuously improving an Information Security Management System (ISMS). This defines how Patchworks manages security in a holistic, comprehensive manner.
To confirm our certification, scan the QR code above, or check the link below!
Leveraging a combination of proven technologies and innovative solutions, our tech stack is curated to provide a comprehensive, flexible environment for developing, deploying, and managing our products.
Our user interface combines the power of PrimeVue for feature-rich UI components, Tailwind for styling, and Vue.js for building a progressive and interactive user experience.
Laravel is a PHP framework known for its elegant syntax and robust features. Combined with Nuxt - an open source framework based on Vue.js, Nitro, and Vite - we have a solid foundation for server-side rendering and seamless navigation.
We leverage the agility and scalability of Amazon Web Services (AWS) for cloud infrastructure, Vercel for seamless deployment and hosting, Kubernetes for container orchestration, and Argo for managing and automating workflows.
Our development process utilises TypeScript for type safety, PHPUnit for comprehensive testing, NPM for efficient package management, and Docker for containerisation.
We use MariaDB and MySQL for relational database management, Elasticsearch for powerful search and analytics, and Redis for high-performance caching and data storage.
The Patchworks infrastructure is designed for resilience and scalability - utilising cutting-edge technologies and best practices to ensure your data flows securely, efficiently, and reliably.
Patchworks NordLayer VPN: 89.47.62.54
AWS Production K8s Cluster: 18.168.241.46, 18.168.94.149, 13.41.170.82
Microservices are used to build the Patchworks platform - small, independent services that communicate with each other, allowing for flexibility, scalability, and easier maintenance.
API first is key for powerful integrations. Our next-generation dashboard is driven by our own APIs, which means we can integrate with any other API simply and seamlessly.
Cloud-native development facilitates our microservice architecture, Kubernetes deployments, DevOps infrastructure as code, and much more!
Headless is exactly what you'd expect for an API-first platform. The Patchworks backend is built with our own API, which is then consumed by the dashboard for general use.
Real-time logs (via web sockets) can be viewed while a process flow runs, with visibility of request, response, and payload information at every step (see the sketch below).
Logs are retained for one month for retrospective problem-solving.
Webhooks, events, and inbound API requests can all be tracked through the Patchworks Dashboard - you don’t need to be an engineer to figure out when/where execution errors occur.
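As a rough illustration of how a client might consume such a real-time log stream, here is a minimal WebSocket sketch. The endpoint URL and message shape are hypothetical and are not documented Patchworks API details.

```typescript
// Minimal sketch of subscribing to a real-time log stream over a WebSocket.
// The URL and message structure are hypothetical illustrations only.
const socket = new WebSocket("wss://dashboard.example.test/process-flows/123/logs");

socket.addEventListener("message", (event: MessageEvent<string>) => {
  // Assume each message carries one log entry for a process flow step.
  const entry = JSON.parse(event.data) as {
    step: string;
    request?: unknown;
    response?: unknown;
    payload?: unknown;
  };
  console.log(`[${entry.step}]`, entry);
});

socket.addEventListener("close", () => console.log("Log stream closed"));
```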
We manage all API updates for our library of prebuilt connectors.
We take care of all supported authentication mechanisms (OAuth, token, API key, etc.).
We've removed all the complexity when it comes to building and maintaining new integrations.
Our multi-tenant architecture means that customers have the benefits of shared software and infrastructure, secure in the knowledge that each customer's data is isolated and invisible to other tenants.
Multi-tenancy provides a much faster solution, since we only search one tenant's data rather than everything - all database operations and secret storage are per-tenant.
Multi-tenancy allows flexibility for change - if necessary we can 'lift and shift' a tenant to a new database, or to a faster region, or even to a completely different cloud provider in a different continent!
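As a rough sketch of the per-tenant idea (not the actual Patchworks implementation), you can picture each request resolving a dedicated database connection for its tenant before any query runs:

```typescript
// Minimal sketch of per-tenant isolation: each tenant maps to its own database,
// so queries only ever touch that tenant's data. The TenantConfig shape, names,
// and hostnames are hypothetical illustrations.
interface TenantConfig {
  tenantId: string;
  dbHost: string; // could point to a different database, region, or cloud provider
  dbName: string;
}

const tenantRegistry = new Map<string, TenantConfig>([
  ["acme", { tenantId: "acme", dbHost: "eu-west-1.db.example", dbName: "tenant_acme" }],
  ["globex", { tenantId: "globex", dbHost: "us-east-1.db.example", dbName: "tenant_globex" }],
]);

function resolveTenantDatabase(tenantId: string): TenantConfig {
  const config = tenantRegistry.get(tenantId);
  if (!config) throw new Error(`Unknown tenant: ${tenantId}`);
  return config; // every query for this request uses only this tenant's database
}

console.log(resolveTenantDatabase("acme"));
```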
Infrastructure updates are made via IaC (Infrastructure as Code).
Infrastructure updates are peer-reviewed and authorised before being merged.
All production code flows through development and staging review cycles before release to production.
Every release must pass both automated and hands-on testing by our QA team.
Product penetration testing is performed annually by an external, CREST-accredited organisation.
Our Kubernetes nodes live in private subnets.
All key ingresses are IP whitelisted.
We adopt a 'least privilege' model for our development team, and also for users of our AWS and Kubernetes infrastructure.
All key business systems must be accessed via a VPN.
All staff use LastPass to generate and store strong passwords - 2FA access to LastPass is mandated.
Access is managed via role-based permissions, so only authorised users can access integrations and data for their company profile(s).
Audit logs provide a complete history of all user account activity, including Patchworks users.
Users always control their own passwords - password resets are never performed on behalf of other users.
Single sign-on via Google is supported.
The system continuously monitors traffic and resource usage (CPU, memory).
When usage exceeds predefined thresholds, the auto-scaler triggers.
Additional resources/pods are deployed to handle the increased load.
When demand drops, resources are reduced to optimise costs.
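To illustrate this threshold-driven behaviour in generic Kubernetes terms, here is a minimal HorizontalPodAutoscaler sketch that scales on CPU utilisation, written as a TypeScript object for readability. Patchworks' pod scaling is driven by KEDA as described earlier; the names, replica bounds, and 75% target here are purely illustrative.

```typescript
// Minimal sketch of threshold-based scaling with a Kubernetes
// HorizontalPodAutoscaler (autoscaling/v2). All names and numbers are
// illustrative, not Patchworks' production values.
const exampleAutoscaler = {
  apiVersion: "autoscaling/v2",
  kind: "HorizontalPodAutoscaler",
  metadata: { name: "core-microservice-hpa" },
  spec: {
    scaleTargetRef: { apiVersion: "apps/v1", kind: "Deployment", name: "core-microservice" },
    minReplicas: 1,  // scale back down to this when demand drops
    maxReplicas: 10, // cap on additional pods deployed under load
    metrics: [
      {
        type: "Resource",
        resource: { name: "cpu", target: { type: "Utilization", averageUtilization: 75 } },
      },
    ],
  },
};

console.log(JSON.stringify(exampleAutoscaler, null, 2));
```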
We use AWS RDS for all critical databases. Our databases have full redundancy with one ‘read’ and one ‘write’ copy of each.
Each database copy is hosted in a separate availability zone so, in the unlikely event of a failure in one zone, we can fall back to the other.
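As a simple illustration of how an application can take advantage of this setup (not Patchworks' actual code), read queries can be routed to the read copy while writes go to the write copy:

```typescript
// Minimal sketch of routing queries between writer and reader endpoints, as is
// typical with an RDS setup that keeps a 'write' copy and a 'read' copy in
// separate availability zones. Hostnames are hypothetical.
const databaseEndpoints = {
  writer: "prod-db.cluster-abc.eu-west-1.rds.example",    // handles INSERT/UPDATE/DELETE
  reader: "prod-db.cluster-ro-abc.eu-west-1.rds.example", // read copy in a separate availability zone
};

function endpointFor(sql: string): string {
  // Route read-only statements to the reader; everything else hits the writer.
  const isRead = /^\s*select\b/i.test(sql);
  return isRead ? databaseEndpoints.reader : databaseEndpoints.writer;
}

console.log(endpointFor("SELECT * FROM orders"));         // -> reader endpoint
console.log(endpointFor("UPDATE orders SET status = 1")); // -> writer endpoint
```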
Kubernetes pod and node auto-scaling ensures that integrations run consistently, even at the busiest times. See our page for more information.