Building frappe assets…
<--- Last few GCs --->
[28372:0x29d6070] 12820 ms: Scavenge 276.9 (303.7) → 270.5 (309.7) MB, 44.4 / 0.9 ms allocation failure
[28372:0x29d6070] 13006 ms: Scavenge 282.1 (309.7) → 275.0 (314.7) MB, 43.8 / 1.1 ms allocation failure
[28372:0x29d6070] 19874 ms: Scavenge 287.7 (314.7) → 281.0 (317.7) MB, 747.0 / 242.6 ms allocation failure
[28372:0x29d6070] 26141 ms: Scavenge 291.3 (318.2) → 284.8 (325.2) MB, 118.5 / 4.2 ms allocation failure
<--- JS stacktrace --->
Cannot get stack trace in GC.
FATAL ERROR: NewSpace::Rebalance Allocation failed - process out of memory
1: node::Abort() [/usr/bin/node]
2: 0x8cbf4c [/usr/bin/node]
3: v8::Utils::ReportOOMFailure(char const*, bool) [/usr/bin/node]
4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
5: 0xa63b6b [/usr/bin/node]
6: v8::internal::MarkCompactCollector::Evacuate() [/usr/bin/node]
7: v8::internal::MarkCompactCollector::CollectGarbage() [/usr/bin/node]
8: v8::internal::Heap::MarkCompact() [/usr/bin/node]
9: v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/bin/node]
10: v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
11: v8::internal::NewCode(v8::internal::CodeDesc const&, unsigned int, v8::internal::Handle<v8::internal::Object>, bool, int) [/usr/bin/node]
12: v8::internal::CodeGenerator::MakeCodeEpilogue(v8::internal::TurboAssembler*, v8::internal::EhFrameWriter*, v8::internal::CompilationInfo*, v8::internal::Handle<v8::internal::Object>) [/usr/bin/node]
13: v8::internal::compiler::CodeGenerator::FinalizeCode() [/usr/bin/node]
14: v8::internal::compiler::PipelineImpl::FinalizeCode() [/usr/bin/node]
15: v8::internal::compiler::PipelineCompilationJob::FinalizeJobImpl() [/usr/bin/node]
16: v8::internal::Compiler::FinalizeCompilationJob(v8::internal::CompilationJob*) [/usr/bin/node]
17: v8::internal::OptimizingCompileDispatcher::InstallOptimizedFunctions() [/usr/bin/node]
18: v8::internal::StackGuard::HandleInterrupts() [/usr/bin/node]
19: v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
20: 0x157db1d042fd
Killed
error Command failed with exit code 137.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I'm getting the same "error Command failed with exit code 137" here as well.
We have tried stopping all processes and running bench start at the same time as bench update (in another tab, as advised elsewhere on the forum), but nothing has helped so far.
The ERPNext site is not visible in the web browser at all.
This apparently happened at the same time that the server updated something and rebooted.
What is peculiar is that if we type this:
systemctl status mariadb
we get this as an answer:
mariadb.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
but if we log in using
mysql -u root -p
we do get in there.
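That usually just means the systemd unit has a different name (for example mysql or mysqld) rather than MariaDB being gone. A quick way to check which unit actually exists, assuming a systemd-based Ubuntu install, is:
systemctl list-units --type=service | grep -i -e maria -e mysql   # list database-related service units
sudo systemctl status mysql   # the running unit may be named mysql rather than mariadb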
It looks like a memory error (you also have a Redis post which is showing errors possibly linked to a memory problem).
The answer to that is:
“No journal files were opened due to insufficient permissions.”
OK, after changing to the root user, I get a lot of lines like this:
May 21 10:53:50 localhost kernel: Out of memory: Kill process 5839 (node) score 430 or sacrifice child
May 21 10:53:50 localhost kernel: oom_reaper: reaped process 5839 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
May 21 10:59:54 localhost kernel: node invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
May 21 10:59:54 localhost kernel: oom_kill_process+0x212/0x430
May 21 10:59:54 localhost kernel: out_of_memory+0x109/0x4b0
May 21 10:59:54 localhost kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
May 21 10:59:54 localhost kernel: Out of memory: Kill process 6980 (node) score 506 or sacrifice child
May 21 10:59:54 localhost kernel: oom_reaper: reaped process 6980 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
May 21 11:24:50 localhost kernel: python invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
May 21 11:24:50 localhost kernel: oom_kill_process+0x212/0x430
May 21 11:24:50 localhost kernel: out_of_memory+0x109/0x4b0
May 21 11:24:50 localhost kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
May 21 11:24:50 localhost kernel: Out of memory: Kill process 22514 (node) score 444 or sacrifice child
May 21 11:24:50 localhost kernel: oom_reaper: reaped process 22514 (node), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Add a sudo to the command:
sudo journalctl -k | grep -i -e memory -e oom
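If journalctl still refuses without root, the same kernel messages can usually also be read from the kernel ring buffer (just an alternative, not required):
sudo dmesg -T | grep -i -e memory -e oom   # -T prints human-readable timestamps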
Definitely a memory issue.
@adam26d suggested in that post: "You want to increase your server memory. Try increasing swap to 2 or 4GB." That would probably help.
If it happened straight after an update, then maybe one of the processes wasn't released cleanly. If that persists, you'll need to increase the RAM, but first try things like rebooting the server so it starts in a cleaner state.
Any idea why this happens after months of use without an issue (and exactly after a server update)? And what is the best solution: to add RAM to the VPS itself, or something else?
We've done probably 10 reboots today while trying to solve this issue, so maybe that's not the solution. Of course I can do it once more, and clean the bench cache etc. if that helps.
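For reference, the bench cache can be cleared from inside the bench directory; a sketch, assuming the site name shown in the logs later in this thread (mysite.local):
bench --site mysite.local clear-cache           # clears the server-side cache
bench --site mysite.local clear-website-cache   # clears the rendered website cache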
Can you share the hardware/specs/OS that you're running?
Maybe also some memory stats using:
free -mh
echo "AvailMem:"$(free -h | awk '/Mem:/ {print $7}'); #shows "Available memory"
df -h
total used free shared buff/cache available
Mem: 970M 515M 120M 4.6M 335M 294M
Swap: 511M 2.3M 509M
AvailMem:294M
Filesystem Size Used Avail Use% Mounted on
/dev/root 25G 5.7G 18G 25% /
devtmpfs 483M 0 483M 0% /dev
tmpfs 486M 0 486M 0% /dev/shm
tmpfs 486M 6.7M 479M 2% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 486M 0 486M 0% /sys/fs/cgroup
tmpfs 98M 0 98M 0% /run/user/0
And the OS: Ubuntu 16.04 LTS
I would first try increasing the swap space as adam26d recommended, and if that doesn't work, increase the RAM to 2GB.
OK, since we already have swap, and the DigitalOcean link says that guide is for when you have none, we need to find out how to increase an existing one. The swap was actually set up originally when purchasing the new VPS.
(We are actually on Linode, not DigitalOcean.)
=> The maximum size for a swap image in Linode is 512 MB, so it is already at the maximum.
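For what it's worth, a common workaround when the provider's swap image is capped is to add an extra swap file on the server's own disk. A rough sketch, assuming about 2GB of free disk space (the path and size are just examples):
sudo fallocate -l 2G /swapfile    # create a 2GB file (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                     # verify the new swap is active
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep it across reboots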
By the way, when running "bench start" it also prints this just a few lines before the issue starts:
(node:2827) Warning: Possible EventEmitter memory leak detected. 11 change listeners added. Use emitter.setMaxListeners() to increase limit
Is this a bug or malfunction on the ERPNext side, or something else?
OK, now we have upgraded the whole server to 2GB of RAM. After that there was still an error with "bench start", so we ran "bench build", which completed successfully, then stopped all services with "…" and ran "bench start" again.
Now "bench start" gets much further than before, but it looks like it keeps repeating lines that start like this:
13:20:38 worker_short.1 | 13:20:38 short: frappe.utils.background_jobs.execute_job(event=u'all', is_async=True, job_name=u'frappe.email.queue.flush', kwargs={}, method=u'frappe.email.queue.flush', site=u'mysite.local', user=u'Administer') (c89049ae-ea1e-4c1c-b531-f0531da209b4)
…and end like this:
13:20:39 worker_short.1 | 13:20:39 short: frappe.utils.background_jobs.execute_job(event=u'all', is_async=True, job_name=u'pull_from_email_account|My Website', kwargs={'email_account': u'My Website'}, method=<function pull_from_email_account at 0x7fad1a5d3cf8>, site=u'mysite.local', user=u'Administrator') (f9090b67-b554-4089-a0c4-08f00d02ffcf)
…and these repeat in cycles: it waits for 4 minutes and then prints the same lines again. Let's wait patiently and see if anything else happens.
It just keeps going on. The "bench start" process has never taken this much time!
Between the above cycles, lines like these often appear:
13:48:41 web.1 | 127.0.0.1 - - [21/May/2019 13:48:41] "GET /robots.txt HTTP/1.0" 200 -
13:48:41 web.1 | INFO:werkzeug:127.0.0.1 - - [21/May/2019 13:48:41] "GET /robots.txt HTTP/1.0" 200 -
13:48:44 worker_short.1 | 13:48:44 short: Job OK (c4d2623c-59a3-47b9-83ee-58e0b16eb70e)
13:48:44 worker_short.1 | 13:48:44 Result is kept for 500 seconds
13:48:45 web.1 | 127.0.0.1 - - [21/May/2019 13:48:45] "GET /tuotteet/necklaces/leather/tiger-eye-lxph1 HTTP/1.0" 200 -
13:48:45 web.1 | INFO:werkzeug:127.0.0.1 - - [21/May/2019 13:48:45] "GET /tuotteet/necklaces/leather/tiger-eye-lxph1 HTTP/1.0" 200 -
…and then the cycle of "worker_short.1" lines repeats again every 4 minutes.
If you are running production mode, then you don’t need to do a “bench start”
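In a standard production setup the frappe processes are managed by supervisor and served through nginx, so instead of bench start you would restart those services. A sketch, assuming the usual bench setup production configuration:
sudo supervisorctl restart all    # restarts the gunicorn/worker processes
sudo systemctl restart nginx      # or: sudo service nginx restart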
Yes, it is a live website in production mode. But how do I get the ERP/website visible again?
And can I just force-quit this bench start process without damaging anything?
You can just open a browser to http://localhost:8000 to access the ERPNext site.
Oh, true, it is visible at the IP address:8000, but no longer at the IP address alone or at the website's domain name. Oh dear… what killed that?
You may have a firewall which is blocking port 8000.
Also check that the /etc/hosts file has the right content
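A couple of quick checks, assuming Ubuntu's default ufw firewall and a standard nginx setup:
sudo ufw status verbose                      # see whether ports 80/443 are being blocked
sudo ss -tlnp | grep -e ':80 ' -e ':8000 '   # see which ports are actually being listened on
cat /etc/hosts                               # verify the hostname entries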