My enqueued job is failing. Am I correct that this is an out-of-memory issue?
Work-horse terminated unexpectedly; waitpid returned 9 (signal 9);
In the enqueued job I am inserting data into Milvus:
from pymilvus.model.hybrid import BGEM3EmbeddingFunction
embed_model = BGEM3EmbeddingFunction(use_fp16=False, device="cpu")
This happens to me whenever an enqueued job takes longer than 20-30 minutes. The environment has plenty of memory, but the Python RQ workers are unable to use that memory.
This is not a Frappe Framework bug. It is either:
- a problem with Python RQ.
- a problem with Linux and Python RQ.
There are GitHub issues that discuss this behavior.
Suggestions:
- If possible, optimize your enqueued code for speed and memory.
- Break your code apart into multiple, smaller jobs, then run those jobs in sequence or in parallel, as appropriate (see the sketch after this list).
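A minimal sketch of that second suggestion, assuming a custom app: `my_app.api.insert_batch`, `BATCH_SIZE`, and `doc_ids` are placeholders, not real Frappe or RQ names; only `frappe.enqueue` itself is the framework API.

```python
import frappe

BATCH_SIZE = 500  # assumption: tune to your data size and memory budget

def enqueue_milvus_inserts(doc_ids):
    """Split one huge embed-and-insert job into many short RQ jobs."""
    for start in range(0, len(doc_ids), BATCH_SIZE):
        frappe.enqueue(
            "my_app.api.insert_batch",   # placeholder dotted path to your batch handler
            queue="long",                # heavier work belongs on the long queue
            timeout=600,                 # each small job should finish well inside this
            doc_ids=doc_ids[start:start + BATCH_SIZE],
        )
```

Each job then embeds and inserts only its own batch, so no single work-horse has to stay alive for 20-30 minutes.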
If none of this works, then just create a cron job in Linux that runs `bench execute <your code>`. This will succeed where RQ fails.
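A sketch of what that could look like, assuming a custom app called "my_app"; the dotted path, site name, bench path, schedule, and the `File` doctype used as a data source are all placeholders.

```python
# my_app/api.py -- target for `bench execute` (all names here are assumptions)
#
# Example crontab entry, run nightly at 02:00:
#   0 2 * * * cd /home/frappe/frappe-bench && bench --site mysite.local execute my_app.api.rebuild_milvus_index >> /tmp/milvus_rebuild.log 2>&1
import frappe

def rebuild_milvus_index():
    """Run the embedding + Milvus insert directly, with no RQ work-horse to kill it."""
    # bench execute initialises the site context, so the ORM is available here.
    doc_ids = frappe.get_all("File", pluck="name")  # placeholder source of documents
    # ... embed with BGEM3EmbeddingFunction and insert into Milvus, batch by batch ...
    frappe.db.commit()  # assumption: commit any Frappe-side status updates yourself
```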