How to debug this system background job error?

What would you do to track down this failing background job?

```
Traceback (most recent call last):
  File "apps/frappe/frappe/utils/", line 144, in execute_job
  File "apps/frappe/frappe/email/doctype/email_account/", line 971, in pull_from_email_account
  File "apps/frappe/frappe/email/doctype/email_account/", line 352, in receive
    emails = email_server.get_messages()
  File "apps/frappe/frappe/email/", line 142, in get_messages
    if not self.connect():
  File "apps/frappe/frappe/email/", line 74, in connect
    return self.connect_imap()
  File "apps/frappe/frappe/email/", line 82, in connect_imap
    self.imap = Timed_IMAP4_SSL(
  File "apps/frappe/frappe/email/", line 625, in __init__
    self._super.__init__(self, *args, **kwargs)
  File "/usr/lib/python3.10/", line 1323, in __init__
    IMAP4.__init__(self, host, port, timeout)
  File "/usr/lib/python3.10/", line 202, in __init__
    self.open(host, port, timeout)
  File "/usr/lib/python3.10/", line 1336, in open
    IMAP4.open(self, host, port, timeout)
  File "/usr/lib/python3.10/", line 312, in open
    self.sock = self._create_socket(timeout)
  File "/usr/lib/python3.10/", line 1326, in _create_socket
    sock = IMAP4._create_socket(self, timeout)
  File "/usr/lib/python3.10/", line 302, in _create_socket
    return socket.create_connection(address)
  File "/usr/lib/python3.10/", line 824, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.10/", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
```

Hi,

Is it possible to test the email settings with a different email client for that service? Or try telnet?
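The very bottom of that traceback (`socket.gaierror: Temporary failure in name resolution`) means the lookup fails before any IMAP conversation even starts, so you can reproduce it outside ERPNext entirely. A minimal sketch of the same check, where `imap.example.com` is a placeholder for whatever server your Email Account actually points at:

```python
import socket

def check_imap_dns(host: str, port: int = 993) -> bool:
    """Run the same getaddrinfo() lookup that imaplib performs before connecting."""
    try:
        socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM)
        return True
    except socket.gaierror as exc:
        print(f"DNS lookup failed for {host!r}: {exc}")
        return False

# check_imap_dns("imap.example.com")  # substitute your Email Account's server
```

If this fails on the ERPNext server but telnet to port 993 works from your desktop, the problem is the server's resolver (e.g. `/etc/resolv.conf`, container DNS, VPN), not the ERPNext configuration.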

Hey @smino, nice to hear from you again!

The site has one email domain and one email account. Mail sends and receives correctly.

I haven’t ever looked into this part of ERPNext, so I’m kind of stabbing in the dark.

Some sort of background process is trying to fetch emails but presumably isn’t respecting the default mail account settings. My guess, anyway.

My question is really about examining built-in background jobs. Where are they?

Does the email queue list yield any clues?

There’s a Background Jobs page with auto-refresh; will that monitor what’s happening?

It’s completely clean. All emails sent correctly.

The Background Jobs page shows jobs appearing and disappearing. When I select “Show Failed Jobs”, it shows me a long, long page of repetitions of the same error as above.

It does show a slightly deeper stack, however:

```
Traceback (most recent call last):
  File "/home/admin/frappe-bench-ERPLS/env/lib/python3.10/site-packages/rq/", line 1013, in perform_job
    rv = job.perform()
  File "/home/admin/frappe-bench-ERPLS/env/lib/python3.10/site-packages/rq/", line 709, in perform
    self._result = self._execute()
  File "/home/admin/frappe-bench-ERPLS/env/lib/python3.10/site-packages/rq/", line 732, in _execute
    result = self.func(*self.args, **self.kwargs)
  File "/home/admin/frappe-bench-ERPLS/apps/frappe/frappe/utils/", line 144, in execute_job
  File "/home/admin/frappe-bench-ERPLS/apps/frappe/frappe/email/doctype/email_account/", line 971, in pull_from_email_account
```

Later today, I’ll see if I can put a print statement in /home/admin/frappe-bench-ERPLS/env/lib/python3.10/site-packages/rq/ that would give me some sort of job specification.
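Rather than patching site-packages, note that the fields you’d want are already on rq’s `Job` object (`id`, `func_name`, `args`, and `kwargs` are real attributes), so a single print dropped into `perform_job` could emit something like this hypothetical helper produces:

```python
# Hypothetical formatter for the "job specification" print the post above
# describes; `job` is any object exposing rq.job.Job's id, func_name,
# args and kwargs attributes.
def describe_job(job) -> str:
    return f"{job.id}: {job.func_name} args={job.args} kwargs={job.kwargs}"
```

One line like that per job is usually enough to see which dotted method path keeps being enqueued and with which arguments.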

From your original screenshot, that looks like a DNS error.

When ERPNext enqueues Background Jobs, it stores them in Redis. Specifically, the Redis database bound to TCP port 11000.
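As for where the built-in jobs are defined: each app’s `hooks.py` declares a `scheduler_events` dict, and the scheduler enqueues those dotted paths into RQ on the given cadence. A sketch of the shape (the entries mirror frappe’s own `hooks.py`, but treat the exact lists as illustrative, not complete):

```python
# scheduler_events as declared in an app's hooks.py; the scheduler reads
# this and enqueues each dotted method path as a background job.
scheduler_events = {
    "all": [
        "frappe.email.queue.flush",
        # this is the one behind the pull_from_email_account traceback:
        "frappe.email.doctype.email_account.email_account.pull",
    ],
    "daily": [
        "frappe.sessions.clear_expired_sessions",
    ],
}
```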

I believe Ankush was making an improved UI for v14. Until that lands, if you want to see what’s happening in there, I suggest downloading a GUI tool.

My favorite is “Another Redis Desktop Manager”. But Redis Commander is okay too.
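If you’d rather peek before installing anything, RQ’s data is plain Redis keys (`rq:queue:*` lists and `rq:job:*` hashes), so even a stdlib-only probe can list the queues. A rough sketch, assuming Redis answers on localhost:11000 with no auth; `redis-cli -p 11000 keys 'rq:queue:*'` gives the same answer if you have redis-cli handy:

```python
import socket

def rq_queue_keys(host: str = "127.0.0.1", port: int = 11000) -> list:
    """Ask Redis for RQ's queue keys by speaking minimal RESP directly.
    Returns [] if Redis isn't reachable. The single recv() is a
    sketch-level simplification, fine for a handful of keys."""
    try:
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(b"*2\r\n$4\r\nKEYS\r\n$10\r\nrq:queue:*\r\n")
            lines = s.recv(65536).decode().split("\r\n")
    except OSError:
        return []
    # RESP array reply: each "$<len>" header line is followed by one key
    return [lines[i + 1] for i, l in enumerate(lines) if l.startswith("$")]
```

The GUI tools just make browsing the `rq:job:*` hashes (which hold each job’s serialized payload) nicer than raw key dumps.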

For better or worse, I’m now intimately familiar with this component. I had to learn all about it when I wrote my alternative Background Tasks Unleashed scheduler. So if you want a deeper dive into this, just let me know, Martin.

Hmmmm. Burdensome.

Since we’re heading towards a 13 → 14 migration, I think I’ll just let it keep on failing.

In theory, the new system will have no carry-overs from the old one except the database.

So far, at least, no such error has appeared in my offline v14 experimentation.