Newsletter not working


I get the test email but not the newsletter; there are no entries in the ‘tabBulk Email’ table.

The issue seems to be located around this code:

  if getattr(frappe.local, "is_ajax", False):
    # to avoid the request timing out!
    # hack! event="bulk_long" to queue in the longjobs queue
    erpnext.tasks.send_newsletter.delay(,, event="bulk_long")

erpnext.tasks.send_newsletter.delay(... this line doesn't work (no emails are sent), but if I force the else: branch, self.send_bulk() works well.
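As a side note, here is a minimal sketch of what the event kwarg is doing (the queue names below are hypothetical, not the actual frappe source): the event value selects which Celery queue the task is published to, so a task sent with event="bulk_long" waits in a different queue from one sent with an empty event:

```python
# Hypothetical sketch of how event= could map to a queue name; the real
# frappe routing differs, but the idea is the same: a non-empty event
# publishes to a queue that only a matching worker consumes.
def queue_for_event(site, event=None):
    if event == "bulk_long":
        return site + "-longjobs"   # assumed long-jobs queue name
    if event:
        return site + "-" + event
    return site  # default short-jobs queue, named after the site

print(queue_for_event("mysite.local", event="bulk_long"))  # mysite.local-longjobs
print(queue_for_event("mysite.local", event=""))           # mysite.local
```

If no worker is subscribed to the long-jobs queue, a task routed there simply waits forever, which matches the symptoms described below.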

Seems like your async worker is blocked. It might show up in a bit.

Hi rmehta,

I restarted supervisor:

  frappe:frappe-web         RUNNING  pid 7842, uptime 0:20:45
  frappe:frappe-worker      RUNNING  pid 7841, uptime 0:20:45
  frappe:frappe-workerbeat  RUNNING  pid 7849, uptime 0:20:45
  node-socketio             RUNNING  pid 7676, uptime 0:32:49
  redis-async-broker        RUNNING  pid 7675, uptime 0:32:49
  redis-cache               RUNNING  pid 7677, uptime 0:32:49

I looked at the logs and found no errors.

There are no lock files in the sites.

Do you know how to debug the line erpnext.tasks.send_newsletter.delay(,, event="bulk_long")? I can confirm that send_newsletter never runs, and result.failed() returns False for this call.
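One thing worth noting about result.failed(): in Celery it only becomes True after a worker has actually executed the task and it raised an exception. A message that no worker ever consumes stays PENDING forever, so failed() returning False does not prove the task ran. A toy model of that distinction (not the real Celery classes):

```python
# Toy model of task result states (not the real celery.result.AsyncResult):
# failed() is only True for FAILURE, which requires a worker to have run
# the task; a message nobody consumes stays PENDING and never "fails".
def failed(state):
    return state == "FAILURE"

def successful(state):
    return state == "SUCCESS"

for state in ("PENDING", "SUCCESS", "FAILURE"):
    print(state, "failed:", failed(state), "successful:", successful(state))
```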

Where can I look to find the problem?

@Raul_Viveros I get personally vexed by celery too. It's not very transparent.

CC @pdvyas

Can you check the logs in frappe-bench/logs/workers.*.log? They should show a line if a task was successful, or a traceback if there was an exception.

Hi pdvyas,

There are no errors and no successes in worker.error.log, worker.log, workerbeat.error.log, or workerbeat.log.

I researched some more:

I deleted (FLUSHALL) all keys on the Redis server and restarted bench, and I noticed that the ‘celery-task-meta…’ keys continue to increase. I guess this is normal (nobody was using erpnext meanwhile).

When I send the newsletter, a new key appears in Redis, ‘’ (it is one of my sites, the one from which I send the newsletter). I decoded its body, and this is it (‘Test6’ is the name/subject of the newsletter):

  {"expires": null, "utc": true, "args": ["", "Test6"], "chord": null,
   "callbacks": null, "errbacks": null, "taskset": null,
   "id": "dd74a077-4391-4a21-b16d-b2e9194e4d44", "retries": 0,
   "task": "erpnext.tasks.send_newsletter", "timelimit": [null, null],
   "eta": null, "kwargs": {"event": "bulk_long"}}
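For anyone who wants to repeat this, a sketch of how such a queued message body can be decoded: with Celery's default JSON serializer, the body stored in Redis is base64-encoded JSON. The payload below is rebuilt from the message above ("mysite" is a stand-in for the site name the forum stripped out):

```python
# Round-trip sketch: encode a payload the way it sits in Redis
# (base64 of JSON), then decode it the way you would when inspecting
# a stuck queue. "mysite" is a hypothetical site name.
import base64
import json

raw_body = base64.b64encode(json.dumps({
    "task": "erpnext.tasks.send_newsletter",
    "args": ["mysite", "Test6"],
    "kwargs": {"event": "bulk_long"},
    "retries": 0,
}).encode("utf-8"))

payload = json.loads(base64.b64decode(raw_body))
print(payload["task"])             # erpnext.tasks.send_newsletter
print(payload["kwargs"]["event"])  # bulk_long
```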

‘retries’ never increases, which makes me think that erpnext/frappe works fine but Celery never runs the task, though I don't know the reason.

I will continue researching; any indication will be welcome.

Thank you so much for your help

I thought this issue existed only in my deployment, but I tested a full installation on a DigitalOcean droplet running Ubuntu 14.04 64-bit (like my deployment). After running the bench install script, configuring the outgoing email account, and successfully sending with ‘send test’ from the newsletter, I sent the newsletter to a list with subscribers and nothing happened.

Is ‘send newsletter’ the only async task in frappe/erpnext?

Does the same happen to anyone else?

@Raul_Viveros Check if “Enable Scheduler” or equivalent is checked in your System Settings

@rmehta Yes “Enable Scheduled Jobs” is checked in Setup>System>System Settings

@rmehta, @pdvyas I've gotten it to work, but by passing an empty string to the "event" parameter instead of "bulk_long":

  erpnext.tasks.send_newsletter.delay(,, event="")

How can we explain this? Is this valid?

The problem is in this loop:
The only worker in this loop does not start with LONGJOBS_PREFIX (nor with ASYNC_TASKS_PREFIX).
Supervisor runs only one worker, and by default its name is celery@hostname; this is the only worker that the line 44 returns.
Following the flow of the code, the result is that only the queues named exactly like the sites are dispatched: only short jobs, no long jobs (and no async tasks).
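The suspected dispatch logic can be sketched like this (a reconstruction from the description above, with assumed queue and worker names, not the literal frappe source):

```python
# Queues are handed to a worker based on its hostname prefix; the lone
# default worker ("celery@host") matches neither prefix, so nothing ever
# consumes the long-jobs (or async) queues.
LONGJOBS_PREFIX = "longjobs"
ASYNC_TASKS_PREFIX = "async"

def queues_for_worker(worker_name, sites):
    hostname = worker_name.split("@")[0]
    if hostname.startswith(LONGJOBS_PREFIX):
        return [site + "-longjobs" for site in sites]
    if hostname.startswith(ASYNC_TASKS_PREFIX):
        return [site + "-async" for site in sites]
    # Default worker: only the queues named exactly like the sites.
    return list(sites)

print(queues_for_worker("celery@myhost", ["mysite"]))    # ['mysite']
print(queues_for_worker("longjobs@myhost", ["mysite"]))  # ['mysite-longjobs']
```

Under these assumptions, a task sent with event="" lands in the per-site queue that the default worker does consume, which would explain why the empty-string workaround sends the emails.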

I resolved this by adding one more worker (another one will be necessary for async tasks) in supervisor.conf:

command=/home/citysound/frappe-bench/env/bin/python -m frappe.celery_app worker -n longjobs@%%h

The -n option sets the name of the worker.
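For reference, a complete supervisor program block along these lines might look as follows. The command and paths are taken from the post above, while the program name, user, autorestart, and log file locations are assumptions to adapt to your own bench:

```ini
[program:frappe-longjobs-worker]
command=/home/citysound/frappe-bench/env/bin/python -m frappe.celery_app worker -n longjobs@%%h
directory=/home/citysound/frappe-bench
user=citysound
autostart=true
autorestart=true
stdout_logfile=/home/citysound/frappe-bench/logs/worker-longjobs.log
stderr_logfile=/home/citysound/frappe-bench/logs/worker-longjobs.error.log
```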

I don't know if this is a global issue or whether it only occurs in environments like mine.


@Raul_Viveros Thanks for debugging this!

CC @pdvyas, @anand

You're welcome, I enjoyed it like a baby :smile:


@Raul_Viveros Thanks! We knew there was an issue lurking somewhere, thanks for catching it :smile: I guess it's fixed now.