Error: frappe.model.utils.link_count.update_link_count exceeded maximum timeout (300 seconds)

Been receiving the following error log every hour:

{'retry': 0, 'log': <function log at 0x7f4da108e1b8>, 'site': u'site1.local', 'event': u'hourly', 'method_name': 
u'frappe.model.utils.link_count.update_link_count', 'method': <function update_link_count at 0x7f4da1952578>, 
'user': u'Administrator', 'kwargs': {}, 'async': True, 'job_name': u'frappe.model.utils.link_count.update_link_count'}
Traceback (most recent call last):
  File "/home/frappe/frappe-bench/apps/frappe/frappe/utils/", line 95, in execute_job
  File "/home/frappe/frappe-bench/apps/frappe/frappe/model/utils/", line 49, in update_link_count
    raise e
JobTimeoutException: Job exceeded maximum timeout value (300 seconds)

Anyone know what this is and how to fix it? Thanks.

Hi @DBoobis,

Can you check whether there are any items stuck in your job queue?

Also, look at any other hourly hooks and compare them against changes made shortly before this started. Any more details will help…

The only entries in Background Jobs are frappe.utils.global_search.sync_global_search.

The only changes made to hooks were doc_events, all on_submit or on_update.

The only hourly hooks I can see are erpnext.accounts.doctype.subscription.subscription.make_subscription_entry and an email-related one. We haven’t set up the email accounts yet; might that have something to do with it?

Does anyone know anything about this? What is it trying to do? This happens every hour, and the queue gets stuck behind it.

update_link_count is a routine database update; see the source: frappe/ at develop · frappe/frappe · GitHub
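For context, a minimal sketch of the batched-counter pattern that job appears to use (the function and argument names below are hypothetical stand-ins; the real code lives in frappe/model/utils/link_count.py and reads its counts from the cache):

```python
# Sketch: flush cached link counts to the database in one pass.
# flush_link_counts and its arguments are hypothetical stand-ins,
# not the actual Frappe API.

def flush_link_counts(cached_counts, execute_sql):
    """Apply each accumulated count as one UPDATE, then clear the cache."""
    for (doctype, name), count in cached_counts.items():
        # One statement per linked document: with a large enough backlog,
        # this loop alone can blow past a 300-second job timeout.
        execute_sql(
            "update `tab{0}` set idx = idx + {1} where name=%s".format(doctype, count),
            (name,),
        )
    cached_counts.clear()

# Tiny in-memory demo instead of a real database connection:
issued = []
counts = {("Customer", "CUST-00001"): 3, ("Item", "ITEM-00042"): 1}
flush_link_counts(counts, lambda query, values: issued.append((query, values)))
print(len(issued), len(counts))  # 2 statements issued, cache emptied
```

If the cached backlog keeps growing faster than the hourly run can drain it, each run gets slower until it hits the timeout.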

So a timeout may mean your background workers are unable to keep up.

Possibly the physical resources are not enough for the load?

To diagnose, this thread is instructive: ERPNext running slow

Nothing in the slow query log, and we’re running with 6 cores and 12GB of RAM. From the GitHub link it looks like this job is supposed to run on demand rather than on a schedule, and the error logs suggest it’s constantly retrying a failed query. Is there any way to tell it to stop retrying?

Certainly; the question is rather why you would want to do that.

For a list of the scheduled (cron-style) jobs, run this in ‘bench console’:
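A sketch of what such a listing could look like (assuming the standard frappe.get_hooks("scheduler_events") API; the stand-in dict below mimics its return shape so the example runs outside a bench):

```python
# Inside `bench console` you would start from:
#     import frappe
#     scheduler_events = frappe.get_hooks("scheduler_events")
# Stand-in data with the same shape, so this sketch runs anywhere:
scheduler_events = {
    "hourly": ["frappe.model.utils.link_count.update_link_count"],
    "all": ["frappe.utils.global_search.sync_global_search"],
}

rows = []
for frequency in sorted(scheduler_events):
    for method in scheduler_events[frequency]:
        rows.append((frequency, method))
        print(frequency, method)
```

This prints each scheduled method next to its frequency, so you can see everything competing for the hourly slot.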

For more insight, this is instructive: frappe/ at develop · frappe/frappe · GitHub

You may have hit some sort of tipping point?

So, what to increase for a trial period to see what helps:

- timeout interval: Dropbox backup failed - Job exceeded maximum timeout value - #4 by UmaG
- number of workers: ERPNext Slow with multiple POS Users - #2 by felix

Also confirm rules of thumb like innodb_buffer_pool_size, and other insights from the authority!
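For example (a sketch only; a commonly cited rule of thumb is to give InnoDB the majority of RAM on a dedicated database host, so verify the figure against your own setup), the buffer pool is set in the MariaDB config:

```ini
# my.cnf fragment (illustrative value for a 12GB dedicated DB host)
[mysqld]
innodb_buffer_pool_size = 8G
```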

Also try running:

bench doctor

I’ve increased the scheduler_interval to 600; hopefully that will help.
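For reference, a sketch of where that setting lives (scheduler_interval is read from site config, in seconds; key name per the Frappe config conventions, so double-check against your version):

```json
{
  "scheduler_interval": 600
}
```

This goes in sites/common_site_config.json (or a single site's site_config.json to scope it to that site).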

I’m the only user at the moment, and I’m not doing anything most of the time. innodb_buffer_pool_size is 9GB, and everything else is correctly configured for the resources allocated. bench doctor said 3 workers online and no jobs; it doesn’t appear to have done anything, though.

I don’t have this issue in my dev environment, which is a VM on my laptop (4GB RAM, 1 processor), so I really don’t think the problem is an overloaded system.