Hi, I have a server with a 4-core Intel Xeon and 8 GB RAM. It is running only one site with one retail outlet. The total number of invoices as of now is 192,721, the majority (about 97%) from POS. On average there are 4-5 POS invoices to be synced every 5 minutes. One store with one POS only. I have tried everything to bring down RAM usage, including moving the db to another VPS on the suspicion that the previous VPS had faulty hardware.
Sometimes I can't even update without rebooting, as that is the only time the RAM usage goes down. Once the server is running POS again it goes back up.
I hope you have already tried the things mentioned in the following link; they have usually helped me in the past (mainly using the Slow Query Log and %prun) to identify bottlenecks and tune them.
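If you have not enabled it yet, here is a minimal sketch of turning on the MariaDB slow query log at runtime; the 2-second threshold and the log path are example values, not recommendations:

```bash
# Enable the slow query log on a running MariaDB server (no restart needed).
# long_query_time = 2 and the log path below are example values; tune to taste.
mysql -u root -p -e "
  SET GLOBAL slow_query_log = 1;
  SET GLOBAL long_query_time = 2;
  SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
"

# Watch the log while the POS sync runs to see which queries are worth tuning.
sudo tail -f /var/log/mysql/slow.log
```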
Generally, the size of the swap file depends on how much RAM your system has:
Systems with less than 2 GB RAM - 2 times the amount of RAM.
Systems with 2 to 8 GB RAM - the same size as the amount of RAM.
Systems with more than 8 GB RAM - at least 4 GB of Swap.
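Before resizing anything, it is worth checking where you currently stand; these are standard Linux commands, nothing ERPNext-specific:

```bash
# Show current RAM and swap usage in human-readable units.
free -h

# List the active swap devices/files and their sizes.
swapon --show
```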
Well, I did increase the swap to match the RAM and then restarted MySQL. By the time I did this the business had closed for the night. However, I still can't understand why MySQL is taking 18.9% of memory. I also disabled Perpetual Inventory to be on the safe side.
I guess I will find out tomorrow. Thank you for the help though. For those interested in how to increase the swap, here is the method: you have to delete the existing swap file and create a new one.
Started with sudo swapoff /swapfile
Got the error: swapoff: /swapfile: swapoff failed: Cannot allocate memory
Tried sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
Got the error: dd: failed to open '/swapfile': Text file busy (the file was still in use as swap because swapoff had failed)
Fixed it with sudo sysctl -w vm.overcommit_memory=1, after which swapoff went through and the swap file could be recreated.
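Putting the pieces together, here is a sketch of the full recreate-swap sequence on this kind of server; the 8 GB size matches the 8 GB RAM box above, so adjust the count to your own target size:

```bash
# Allow the kernel to overcommit memory so swapoff can move pages back into RAM.
sudo sysctl -w vm.overcommit_memory=1

# Disable the old swap file, then overwrite it with a new 8 GiB one (8192 x 1 MiB).
sudo swapoff /swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=8192

# Restrict permissions, format the file as swap and enable it.
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify the new swap is active (and make sure /swapfile is listed in /etc/fstab
# so it survives a reboot).
swapon --show
free -h
```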
I think we may have found the culprit. It's the backup that runs at a set frequency. Here are the steps we took to test it.
Restarted mysql so the memory usage was down to about 1.3 GiB.
Ran the backup command (it was not --with-files).
Tracked memory usage (one way to log this is sketched after these steps). It moved to 4.5 GB while the backup was being made.
Backup creation finished but the memory stayed at 4.5 GB; it did not go down. Left the system running without any user logged in.
Ran the backup again. Memory moved up to 6 GB.
Made more backups; memory moved up to 7 GB.
Left the system running without any user logged in. After a few hours the memory was still stuck at 7 GB.
Without any transactions or tasks happening, the memory is stuck at 7 GB.
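For anyone who wants to reproduce the measurement, here is a small sketch that logs mysqld's resident memory every 30 seconds; the interval and log path are arbitrary choices, not part of the original test:

```bash
#!/usr/bin/env bash
# Log the resident set size (RSS) of the mysqld process every 30 seconds.
# Run this in one terminal, start `bench backup` in another, and compare the
# readings before, during and after the backup.
LOG=/tmp/mysqld-rss.log   # arbitrary log location

while true; do
    rss_kb=$(ps -C mysqld -o rss= | awk '{sum += $1} END {print sum}')
    printf '%s  %s MiB\n' "$(date '+%F %T')" "$(( rss_kb / 1024 ))" >> "$LOG"
    sleep 30
done
```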
For some unknown reason the memory is not being released after the job has completed. I would like to know how we can have the memory released without stopping mysql.
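One thing worth trying before a restart is shrinking the InnoDB buffer pool at runtime (innodb_buffer_pool_size is a dynamic variable in MariaDB 10.2+ / MySQL 5.7+). This is only a sketch, and it assumes the buffer pool is what is holding the memory; if the growth is heap fragmentation outside InnoDB, as the jemalloc workaround later in this thread suggests, it will not release anything:

```bash
# Check the current buffer pool size.
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Shrink it to 1 GiB (example value only); resizing happens in the background.
mysql -u root -p -e "SET GLOBAL innodb_buffer_pool_size = 1073741824;"

# Watch the resize progress.
mysql -u root -p -e "SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';"
```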
HTOP output after running the backup command, about 6 minutes in. Nothing else is happening on the server; it is a test server with no user except me logged in via SSH.
Finally found it. The root cause is the db migration from one VPS to another with a --force restore. The RAM issue only occurs on sites whose production db I migrated from another VPS. I tested VPSes that still have their original db, never transferred from another VPS or a local machine. On those I ran bench backup --with-files many times; RAM usage went up but came back down after the job completed, and it never went as high as on the VPSes that inherited a db from another VPS.
I also suspected Ubuntu 20 was the culprit. To test that theory I migrated the db to a fresh Ubuntu 18 OS. After only restoring the db, RAM was at 6 GB; on running sudo service mysql restart, RAM dropped below 2 GB. I also checked the MariaDB version on all servers; they are all 10.XX.XX, so no issue there. I could not find out why the migrated db eats up RAM on backup. If anyone has a clue, do let me know. There is no issue in the .cnf settings either; the servers do have slightly different settings, but that has no impact on RAM usage on the properly working servers.
To fix this I am thinking of having a MySQL event scheduler that restarts the db, let's say at 7 AM daily, and also setting the backup to run only once a day, which I presume will happen at 00:00 hours (12 AM). I cannot restart MySQL while the server is being used, which is from 8 AM to 2 AM.
A cron job is also an alternative, but I read that a db event is much better and cleaner. Is there any db expert who can help me set up the event scheduler? If not, then a cron job to restart MySQL?
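For what it is worth, a MariaDB event runs SQL inside the server process, so it cannot restart mysqld itself; a root cron job is the simpler fit for a daily restart. A minimal sketch, using the 7 AM time from the post above (the service may be named mysql or mariadb depending on the install):

```bash
# Edit root's crontab; the restart needs root privileges.
sudo crontab -e

# Add this line: restart MariaDB at 07:00 every day, before the store opens at 8 AM.
# Replace "mysql" with "mariadb" if that is the service name on your install.
0 7 * * * /bin/systemctl restart mysql >> /var/log/mysql-daily-restart.log 2>&1
```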
I have the same issue: RAM and swap fill up after restoring a backup with files from an old backup. I have upgraded the server to version 13 with a fresh installation.
We had been running on version 12 for more than a year and there was no such issue. I upgraded via bench update to the stable version 13, took backups of all sites, and reinstalled Ubuntu 18.04 LTS to set up a RAID configuration.
Then I installed version 13 and restored the backup. After that I experienced performance issues with ERPNext and found the RAM and swap full all the time. At first I thought it might be a matter of not having enough RAM, but after going through the posts it is the same issue that happened to @Muzzy and others.
What could be the root cause of this? If I restart mysqld, the RAM is full again after 6 minutes.
Well, I tried many things. What finally worked for me were these steps (preloading jemalloc for MariaDB):
1. apt-get update
2. apt-get install -y libjemalloc-dev
3. In /etc/systemd/system/mariadb.service.d/, edit any .conf file in that directory and add this at the end: LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
4. Run systemctl daemon-reload
5. Run sudo systemctl restart mysql.service
6. Run sudo reboot
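If it helps anyone, here is a sketch of step 3 written as a proper systemd drop-in, assuming the usual syntax (Environment= under a [Service] section); the jemalloc.conf file name is just an example, and the library path matches Ubuntu 18.04 on x86_64:

```bash
# Create a systemd drop-in that preloads jemalloc for MariaDB.
sudo mkdir -p /etc/systemd/system/mariadb.service.d
sudo tee /etc/systemd/system/mariadb.service.d/jemalloc.conf > /dev/null <<'EOF'
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"
EOF

# Reload systemd and restart MariaDB so the preload takes effect.
sudo systemctl daemon-reload
sudo systemctl restart mariadb.service

# Verify that jemalloc is actually mapped into the running server process.
sudo grep -i jemalloc /proc/$(pidof mysqld | awk '{print $1}')/maps
```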
I have found some respite with this on a demo server. I will be testing it on the production server.
Hey @Muzzy, did you try it on the production server? Can you please share your experience, as I am also facing the same kind of RAM and swap consumption issue?