Load Balancing Strategy for Production Frappe Deployment

Hello everyone,

I’m deploying Frappe Framework in production on AWS and I could not find clear documentation about recommended load balancing strategies.

My scenario:

  • Production environment

  • Hosted on AWS (EC2)

  • Nginx + Supervisor (standard bench production setup)

  • Redis (cache + queue)

  • MariaDB

  • Background workers enabled

  • Separate FastAPI microservice integrated via RabbitMQ

  • Multiple clients and increasing workload

Questions:

  1. What is the recommended approach for horizontal scaling in Frappe?

  2. Is it safe to run multiple bench instances behind a load balancer?

  3. How should Redis and Socket.IO be handled in a multi-instance environment?

  4. What is the recommended setup for high availability?

  5. Is Docker/Kubernetes officially recommended for scaling?

I couldn’t find detailed guidance in the official documentation about load balancing and HA setups.

Any real-world architecture examples would be highly appreciated.

Thanks!

I don’t have answers to all your questions. But I can say a few things from experience:

  • Moving the MariaDB database to a separate host is a relatively simple + helpful win. This yields immediate performance gains, because MariaDB uses lots of memory, CPU, and disk I/O.

    Just don’t relocate MariaDB too far away, in terms of network latency. The framework and MariaDB are very chatty with each other. You want them to share a low-latency TCP network. Otherwise separating them can actually make performance worse, instead of better.
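For reference, pointing a site at an external MariaDB host is just a site config change. A minimal sketch (the hostname, database name, and password below are placeholders — `db_host` and `db_port` are the standard `site_config.json` keys):

```json
{
  "db_host": "mariadb.internal.example.com",
  "db_port": 3306,
  "db_name": "site1_db",
  "db_password": "********"
}
```

You'll also need MariaDB itself to accept remote connections (bind-address and user grants), which is server-side configuration outside of Frappe.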

  • I don’t think it’s “unsafe” to run multiple bench instances behind a load balancer. I’m not sure of your exact use case, but I can’t think of any real harm in it. I’ve run many different bench instances, with varying versions, all coexisting nicely on one host. No problems there.
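As a rough sketch of what “multiple bench nodes behind a load balancer” can look like at the nginx level (IPs, ports, and the domain are placeholders; each upstream entry is a bench node serving HTTP):

```nginx
upstream frappe_bench_nodes {
    # each entry is one bench node (placeholder addresses)
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}

server {
    listen 80;
    server_name erp.example.com;

    location / {
        proxy_pass http://frappe_bench_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

On AWS the same role is usually played by an ALB with a target group per port, but the shape is the same: stateless nodes, shared database/Redis behind them.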

  • Docker/Kubernetes is definitely what (most) people here will recommend to you. Even for single-instance and single-tenant deployments, it’s the recommended way to install nowadays (last I heard, anyway).

@weldonet

This might help with point 5

Hi Brian — thanks for sharing your experience, this is very helpful.

I completely agree that moving MariaDB to a dedicated host is one of the quickest wins. In my case I’m planning to keep the DB within the same low-latency VPC subnet (likely via RDS) precisely because, as you mentioned, Frappe is quite chatty with the database.

Regarding multiple bench instances behind a load balancer — good to hear you’ve run this successfully. My use case is a multi-tenant production environment with growing concurrency, so my main concern is ensuring everything remains properly stateless (sessions in Redis, shared file storage, and consistent Socket.IO behavior across nodes). If you’ve seen any pitfalls specifically around realtime or file handling in multi-node setups, I’d be very interested to hear.
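One common mitigation for the realtime piece, assuming nginx in front rather than an ALB (on an ALB the rough equivalent is target-group stickiness), is to pin `/socket.io` traffic to a node, e.g. with `ip_hash`. A sketch with placeholder addresses; 9000 is Frappe's default Socket.IO port:

```nginx
upstream socketio_nodes {
    ip_hash;  # pin each client IP to one node so the websocket stays on it
    server 10.0.1.10:9000;
    server 10.0.1.11:9000;
}

location /socket.io {
    proxy_pass http://socketio_nodes;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Since Frappe's realtime events are published through Redis, the nodes should all point at the same Redis instance for events to reach clients regardless of which node they're pinned to.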

On Docker/Kubernetes: I see many recommendations in that direction as well. For now I’m evaluating whether to first scale horizontally with traditional EC2 + ALB and only later move to containers once operational complexity justifies it.

Appreciate you taking the time to respond — very useful input.


Thanks @asieftejani — this is helpful, especially the Traefik-based setup.

From what I understand, this example is mainly focused on consolidating multiple benches on a single host and scaling vertically, which makes sense for certain scenarios. My current goal is to prepare for horizontal scaling across multiple nodes on AWS (ALB in front, shared Redis, external MariaDB, etc.), so I’m trying to map what additional components are required beyond the single-server pattern.

In particular, I’m evaluating:

  • stateless bench nodes behind a load balancer

  • shared file storage strategy

  • Socket.IO behavior across multiple instances

  • HA considerations for Redis and workers
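For the shared-file-storage point, one pattern I've seen (a sketch, not a recommendation — the EFS filesystem DNS name and paths below are placeholders) is to mount the bench's `sites` directory on a shared EFS/NFS volume, so every node sees the same site files and uploads:

```shell
# /etc/fstab entry, identical on every bench node (placeholder filesystem ID)
fs-0123abcd.efs.eu-west-1.amazonaws.com:/ /home/frappe/frappe-bench/sites nfs4 nfsvers=4.1,hard,timeo=600,retrans=2 0 0
```

Whether NFS latency is acceptable for the whole `sites` directory (which also holds config and assets) versus just each site's `public`/`private` files is something I'd want to benchmark before committing to it.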

If you (or others) have seen a recommended reference architecture for true multi-node production setups with Frappe, I’d really appreciate any pointers.

Thanks again for sharing the link — very useful context.