I seem to be encountering memory issues on my ERPNext server. Below is the output of the 'free' utility over a 20-minute period (at roughly 5-minute intervals):
ubuntu@ip-172-31-1-158:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        4770        1600           8        1611        3105
Swap:             0           0           0
ubuntu@ip-172-31-1-158:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        4971        1592           8        1418        2907
Swap:             0           0           0
ubuntu@ip-172-31-1-158:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        3515        4383           8          82        4364
Swap:             0           0           0
ubuntu@ip-172-31-1-158:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7982        7885          45           8          50           7
Swap:             0           0           0
As shown, the server practically ran out of its 8 GB of memory during a period when only one user (me) was accessing any of the sites! Usage also keeps fluctuating significantly. I would really appreciate some guidance here.
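For a longer trace than these hand-taken snapshots, a loop along these lines would log timestamped readings (the 5-minute interval and the memory.log file name are just arbitrary choices):

# Log a timestamped memory snapshot every 5 minutes
# (300 s interval and memory.log are arbitrary; adjust as needed)
while true; do
    date >> memory.log
    free -m >> memory.log
    sleep 300
done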
Wale, this Redis how-to explains how to conduct a benchmark analysis to profile memory usage.
If you could profile, benchmark, and dig into your issue to identify what you are doing and what the result is, that would help guide others to theorize and formulate ideas. Thanks.
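As a rough starting point, something like the following shows what Redis itself is doing (13000 is only an assumed default port for the bench cache instance; use whatever port your redis_cache.conf actually sets):

# Quick throughput benchmark against the cache instance
# (13000 is an assumed port; check redis_cache.conf)
redis-benchmark -p 13000 -q

# Memory statistics for the same instance
redis-cli -p 13000 info memory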
Thanks a lot for your feedback. Redis actually seems to have gotten better after I upgraded MariaDB to 10.1.x.
The main issue now is ERPNext, which seems to be gobbling up memory whenever a list or form is left open on any of the sites. Please see the output of top below:
Upon further investigation, it appears that ERPNext is eating up RAM regardless of what is going on in the sites. It just continues to take up more and more RAM until the system kills the process!
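To turn that from an anecdote into numbers, one option is to snapshot the top memory consumers on an interval and watch whether the same processes keep growing. This is only a sketch; the 60-second interval and procs.log name are arbitrary:

# Every minute, record the processes using the most memory
while true; do
    date >> procs.log
    ps aux --sort=-%mem | head -n 15 >> procs.log
    sleep 60
done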
This is anecdotal, and I will try to get some hard facts. Yesterday I attempted to upgrade to version 8. It was a fresh install on Ubuntu 16.04 fully up to date, with a v6 DB imported and migrated.
I saw unusual memory usage, and pages were hanging/timing out in places. I ended up having to roll back.
If I have time today, I'll try to glean some specific facts. For what it is worth, there seems to be some instability in the ERPNext stack at the moment.
Hi Wale,
I’m assuming you are on a production environment.
I’ve also recently been having the same issues.
Can you run this against the ports set in your redis_queue.conf and redis_cache.conf files to monitor them, and report back?
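Roughly along these lines (the config paths assume a standard frappe-bench layout, and 13000/11000 are only assumed default ports, so use whatever the grep reports):

# Confirm the ports each Redis instance listens on
grep '^port' config/redis_cache.conf config/redis_queue.conf

# Stream the commands hitting each instance in real time
# (13000 / 11000 are assumed defaults; substitute the ports found above)
redis-cli -p 13000 monitor
redis-cli -p 11000 monitor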
Good to hear from you. There are several different logs in the frappe-bench/logs directory. Which of them would hold the required info? The same goes for /var/log/.
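My guess is that the actual kills show up in the kernel log rather than the bench logs; something like this should surface them (the log file names and paths are assumptions about a standard Ubuntu/bench setup):

# From inside frappe-bench: list the logs and check the error ones
ls -lh logs/
tail -n 100 logs/*.error.log

# OOM kills are recorded by the kernel, not by bench
# (exact message wording varies between kernel versions)
dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog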