What is the difference between background_workers and gunicorn_workers?

I want to increase the number of web workers for my site. Which one is best to increase?

I found it better to increase background workers. I'm not sure why - it just worked better for me. Increasing the gunicorn settings means you have to add non-standard sub-packages etc., which can cause other issues (especially after updates).


How Many Workers?
DO NOT scale the number of workers to the number of clients you expect to have. Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second.
Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request.
Obviously, your particular hardware and application are going to affect the optimal number of workers. Our recommendation is to start with the above guess and tune using TTIN and TTOU signals while the application is under load.
Always remember, there is such a thing as too many workers. After a point your worker processes will start thrashing system resources decreasing the throughput of the entire system.
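The (2 x $num_cores) + 1 starting point from the quote above is easy to compute for your own machine. A minimal sketch (this just prints the suggested number; you still pass it to gunicorn yourself via -w):

```python
import multiprocessing

def recommended_workers(num_cores=None):
    """Gunicorn's rule-of-thumb starting point: (2 x cores) + 1."""
    if num_cores is None:
        num_cores = multiprocessing.cpu_count()
    return (2 * num_cores) + 1

# e.g. on a 4-core machine the suggested starting point is 9 workers
print(recommended_workers(4))  # -> 9
```

Remember this is only a starting guess - tune from there under real load, as the docs say.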

How Many Threads?
Since Gunicorn 19, a threads option can be used to process requests in multiple threads. Using threads assumes use of the gthread worker. One benefit from threads is that requests can take longer than the worker timeout while notifying the master process that it is not frozen and should not be killed.

Depending on the system, using multiple threads, multiple worker processes, or some mixture, may yield the best results. For example, CPython may not perform as well as Jython when using threads, as threading is implemented differently by each.

Using threads instead of processes is a good way to reduce the memory footprint of Gunicorn, while still allowing for application upgrades using the reload signal, as the application code will be shared among workers but loaded only in the worker processes (unlike when using the preload setting, which loads the code in the master process).
Under Python 2.x, you need to install the ‘futures’ package to use this feature.
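To make the workers/threads combination concrete, here is a hedged sketch of a gunicorn invocation (the app path `myapp.wsgi:application` is a placeholder, not anything from this thread):

```shell
# 4 worker processes, each handling requests in 2 threads.
# Setting --threads > 1 with the default sync worker makes
# Gunicorn use the gthread worker class instead.
gunicorn --workers 4 --threads 2 --timeout 60 myapp.wsgi:application
```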


Is there any specific way to do this, or do I just update it in common_site_config.json and restart?


For background workers… yes.

For gunicorn… you can edit ~/frappe-bench/config/supervisor.conf

command=/home/frappe/frappe-bench/env/bin/gunicorn -b <host:port> -w 4 -t 60 …


Great, thanks for the help!

Background workers: Frappe enqueues tasks that take a long time to complete. All jobs scheduled to run on a regular basis (hourly, daily, weekly) are also enqueued, to prevent the main thread from having to process them. Background job workers are the ones that handle these tasks.
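The idea is the classic producer/consumer pattern: the web process enqueues a job and returns immediately, and a separate worker picks it up. A minimal stdlib sketch of that pattern (not Frappe's actual implementation, which uses Redis-backed queues):

```python
import queue
import threading

jobs = queue.Queue()
results = []

def background_worker():
    # Pull enqueued jobs off the queue so the main thread never blocks on them
    while True:
        job = jobs.get()
        if job is None:  # sentinel value: shut the worker down
            break
        results.append(job())

worker = threading.Thread(target=background_worker)
worker.start()

# The web process just enqueues the slow work and moves on
jobs.put(lambda: sum(range(1000)))
jobs.put(None)
worker.join()
print(results)  # -> [499500]
```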

Gunicorn Workers : This is what @trentmu is talking about.

You need to change the configuration in common_site_config.json, then run bench setup supervisor, and then do a bench restart.
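For example, a sketch of that sequence (the worker counts here are placeholders - pick values suited to your hardware):

```shell
# In ~/frappe-bench/sites/common_site_config.json, set e.g.:
#   "background_workers": 2,
#   "gunicorn_workers": 9
bench setup supervisor   # regenerate config/supervisor.conf from the site config
bench restart            # reload the processes with the new settings
```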

Don’t edit supervisor.conf directly; that will create inconsistencies between the Frappe config and the supervisor config.


I agree with @vjFaLk. Such changes are not wise!
They should only be done if you are desperate (and silly - like some of us :blush:)

I’ve made the mistake before, so now I prevent people from doing it :laughing:


@vjFaLk thank you for the help :grinning: