Full RAM usage on a 4-core, 8 GB RAM server running a single retail outlet with one POS machine

Hi, I have a 4-core Intel Xeon server with 8 GB RAM. It is running only one site with one retail outlet. Total invoices as of now: 192,721, the majority (about 97%) from POS. On average there are 4-5 POS invoices to be synced every 5 minutes. One store with one POS only. I have tried everything to bring down RAM usage, including moving the db to another VPS on the suspicion that the previous VPS had faulty hardware.

Sometimes I can't even run an update without rebooting, as that is the only time RAM usage goes down. Once the server is running POS again, it climbs back up.

settings.cnf


[mysqld]

# GENERAL #
user                           = mysql
default-storage-engine         = InnoDB
socket                         = /var/lib/mysql/mysql.sock
pid-file                       = /var/lib/mysql/mysql.pid

# MyISAM #
key-buffer-size                = 32M
myisam-recover                 = FORCE,BACKUP

# SAFETY #
max-allowed-packet             = 256M
max-connect-errors             = 1000000
innodb                         = FORCE

# DATA STORAGE #
datadir                        = /var/lib/mysql/

# BINARY LOGGING #
log-bin                        = /var/lib/mysql/mysql-bin
expire-logs-days               = 14
sync-binlog                    = 1

# REPLICATION #
server-id                      = 1

# CACHES AND LIMITS #
tmp-table-size                 = 32M
max-heap-table-size            = 32M
query-cache-type               = 0
query-cache-size               = 0
max-connections                = 500
thread-cache-size              = 50
open-files-limit               = 65535
table-definition-cache         = 4096
table-open-cache               = 10240

# INNODB #
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-log-file-size           = 512M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1
innodb-buffer-pool-size        = 5453M
innodb-file-format             = barracuda
innodb-large-prefix            = 1
collation-server               = utf8mb4_unicode_ci
character-set-server           = utf8mb4
character-set-client-handshake = FALSE
max_allowed_packet             = 256M

# LOGGING #
log-error                      = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes  = 0
slow-query-log                 = 1
slow-query-log-file            = /var/lib/mysql/mysql-slow.log

[mysql]
default-character-set = utf8mb4

[mysqldump]
max_allowed_packet=256M
# BEGIN ANSIBLE MANAGED BLOCK
# Import all .cnf files from configuration directory
!includedir /etc/mysql/mariadb.conf.d/
# END ANSIBLE MANAGED BLOCK
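
Note: the !includedir line at the end pulls in /etc/mysql/mariadb.conf.d/, so files there can override the values above. To confirm which values the server actually ended up with after all files are merged (a quick check, assuming root DB access):

mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW GLOBAL VARIABLES LIKE 'max_connections';"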

my.cnf information

#
max_connections		= 50
innodb_buffer_pool_size = 2G
connect_timeout		= 5
wait_timeout		= 28800
max_allowed_packet	= 500M
thread_cache_size       = 128
sort_buffer_size	= 4M
bulk_insert_buffer_size	= 16M
tmp_table_size		= 64M
max_heap_table_size	= 64M
interactive_timeout = 30
performance_schema = ON
#
# * MyISAM
#
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched. On error, make copy and try a repair.
myisam_recover_options = BACKUP
key_buffer_size		= 128M
#open-files-limit	= 2000
table_open_cache	= 400
myisam_sort_buffer_size	= 512M
concurrent_insert	= 2
read_buffer_size	= 2M
read_rnd_buffer_size	= 1M
#
# * Query Cache Configuration
#
# Cache only tiny result sets, so we can fit more in the query cache.
query_cache_limit		= 2M
query_cache_size		= 256M
query_cache_type 		= 1
query_cache_strip_comments 	= 1
# for more write intensive setups, set to DEMAND or OFF
#query_cache_type		= DEMAND

background workers = 4
gunicorn workers = 4
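
In a standard bench setup these worker counts live in sites/common_site_config.json; to double-check what is actually configured (the path assumes a default bench layout, and the key names background_workers / gunicorn_workers are my assumption here):

grep -E '"(background_workers|gunicorn_workers)"' ~/frappe-bench/sites/common_site_config.json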

I have tried many ways of fixing this without success. I would love to hear from SQL experts on what more I can do.

I noted some MySQL monitoring tools mentioned in this thread. I might also try increasing the amount of swap available.
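
In the meantime, a quick way to see how much RAM the MariaDB process itself is holding, with no extra tooling (mysqld is the MariaDB server process; RSS/VSZ are in kB):

ps -o pid,rss,vsz,cmd -C mysqld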


@Muzzy - What versions of ERPNext and Frappe are you running?

Also, do you have “Perpetual Inventory” enabled?

Are you using native POS or POS Awesome?

BKM

Hope you have already tried the things mentioned in the following link; they have usually helped me in the past (mainly using the Slow Query Log and %prun) to identify bottlenecks and tune them.
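
For what it's worth, the slow query log can also be toggled at runtime without a restart, something like this (a sketch; the log path is the one from the settings.cnf posted above, and the change only lasts until the next restart):

mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 2;"
tail -f /var/lib/mysql/mysql-slow.log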

@bkm V12, and yes, Perpetual Inventory is on. It's retail, so we need it.

Thank you @Pawan, I will try out the method in the link.

The swap is not enough in this case.

Generally, the size of the swap file depends on how much RAM your system has:
Systems with less than 2 GB RAM - 2 times the amount of RAM.
Systems with 2 to 8 GB RAM - the same size as the amount of RAM.
Systems with more than 8 GB RAM - at least 4 GB of Swap.
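
Before resizing, check what is currently active (standard util-linux commands):

swapon --show
free -h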

Well, I did increase the swap to match the RAM and then restarted MySQL; by then the business had closed for the night. However, I still can't understand why mysql is taking 18.9% of memory. I also disabled Perpetual Inventory to be on the safe side.

I guess I will find out tomorrow. Thank you for the help, though. For those interested in how to increase the swap, here is the method. You have to delete the existing swap file and create a new one.

Started with sudo swapoff /swapfile
Got error swapoff: /swapfile: swapoff failed: Cannot allocate memory
Tried sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
Got error dd: failed to open '/swapfile': Text file busy
Fixed it by sudo sysctl -w vm.overcommit_memory=1

From here on it was fine.

sudo rm  /swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

8192 is 1024*8 (for 8 GB)
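
One step the list above leaves out: swapon does not persist across reboots. To make the new swap file permanent, it also needs an /etc/fstab entry (standard Ubuntu practice; skip this if /swapfile is already listed there):

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab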


I think we may have found the culprit. It's the backup that runs at a set frequency. Here are the steps we took to test it.

  1. Restarted MySQL, so memory usage was down to about 1.3 GiB.
  2. Ran the backup command (without --with-files).
  3. Tracked memory usage. It rose to 4.5 GB while the backup was being made.
  4. Backup creation finished, but memory remained at 4.5 GB. It did not go down. Left the system running without any user logged in.
  5. Ran the backup again. Memory rose to 6 GB.
  6. Made another backup; memory rose to 7 GB.
  7. Left the system running without any user logged in. After a few hours memory was still stuck at 7 GB.
    With no transactions or tasks happening, memory stays stuck at 7 GB.
    For some unknown reason the memory is not released after the job completes. I would like to know how we can get the memory released without stopping MySQL.
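For anyone repeating this test, a simple way to log the server's resident memory over time while backups run (a sketch; it assumes the MariaDB process is named mysqld and bash is the shell):

while true; do echo "$(date '+%H:%M:%S') $(ps -o rss= -C mysqld) kB"; sleep 60; done >> /tmp/mysqld-rss.log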

Check log files

Does top still show mysql using the memory? Also, did you say what the hosting environment is, the OS, etc.?

Post the output of cat /proc/meminfo

Hi.

MemTotal:        8152764 kB
MemFree:         4218608 kB
MemAvailable:    5251960 kB
Buffers:          744276 kB
Cached:           466828 kB
SwapCached:         7256 kB
Active:          3205804 kB
Inactive:         540024 kB
Active(anon):    2355448 kB
Inactive(anon):   179664 kB
Active(file):     850356 kB
Inactive(file):   360360 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       2097148 kB
SwapFree:        2034684 kB
Dirty:               428 kB
Writeback:             0 kB
AnonPages:       2532124 kB
Mapped:           133568 kB
Shmem:               392 kB
KReclaimable:      88192 kB
Slab:             149976 kB
SReclaimable:      88192 kB
SUnreclaim:        61784 kB
KernelStack:        3568 kB
PageTables:        10760 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6173528 kB
Committed_AS:    8433776 kB
VmallocTotal:   34359738367 kB
VmallocUsed:        7700 kB
VmallocChunk:          0 kB
Percpu:             1968 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      180076 kB
DirectMap2M:     6111232 kB
DirectMap1G:     4194304 kB

It's Ubuntu 20. ERPNext/Frappe V12. VPS: Intel Xeon, 4 cores, 8 GB RAM, 8 GB swap, 200 GB SSD.
(Screenshot: htop after restarting MySQL.)

(Screenshot: htop about 6 minutes after running the backup command. Nothing else was happening on the server; it's a test server with no user except me, logged in via SSH.)

Finally found it. The root cause is the db migration from one VPS to another with a --force restore. The RAM issue occurs only on sites whose production db I migrated from another VPS. I tested VPSes that still have their original db, never transferred from another VPS or a local machine. On those I ran bench backup --with-files many times; RAM usage went up but came back down after the job completed, and it never went as high as on the VPSes that inherited a db from another VPS.

I also suspected Ubuntu 20 to be the culprit. To test that theory I migrated the db to a fresh Ubuntu 18 install. After just restoring the db, RAM was at 6 GB; after running sudo service mysql restart, it was below 2 GB. I also checked the MariaDB version on all servers; they are all 10.XX.XX, so no issue there. I could not find why the migrated db eats up RAM on backup. If anyone has a clue, do let me know. There is no issue in settings.cnf: the servers have slightly different settings, but that has no impact on RAM usage on the properly working servers.
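
If anyone wants to dig further, comparing the InnoDB buffer pool counters on a migrated server and an original-db server, before and after a backup, might narrow it down (purely a diagnostic sketch):

mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';"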

To work around this I am thinking of having a scheduled job restart the database, say at 7 AM daily, and also setting backups to run only once a day, which I presume will happen at 00:00 hours (12 AM). I cannot restart MySQL while the server is in use, which is from 8 AM to 2 AM.

A cron job is an alternative, but I read that a db event is much better and cleaner. Can any db expert assist me with making an event scheduler? If not, then a cron job to restart MySQL?
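
One caution on the event scheduler idea: MariaDB events run inside the server process, so an event cannot restart the server itself; a host-level cron job is the practical option here. A minimal sketch, assuming a systemd host (the unit may be named mariadb or mysql depending on the install):

sudo crontab -e
# then add a line like this to restart MariaDB daily at 07:00
0 7 * * * /bin/systemctl restart mariadb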

Cheers


I’ve also experienced this before… I’m really interested in understanding the root cause and solution as well

Cheers

I have the same issue: RAM and swap fill up after restoring a backup (with files) from an old backup. I upgraded the server to v13 with a fresh installation.

We had been running on version 12 for more than a year with no such issue. I upgraded via bench update to the stable version 13, took a backup of all sites, and reinstalled Ubuntu 18.04 LTS to set up a RAID configuration.

Then I installed version 13 and restored the backup. After that I experienced ERPNext performance issues and found RAM and swap full all the time. At first I thought it might be an issue of not having enough RAM, but after going through this thread I saw the same issue had happened to @Muzzy and others.

What could be the root cause of this? If I restart mysqld, RAM is full again after about 6 minutes.

Well, I tried many things. These are the steps that finally helped (a sketch of the drop-in file follows the list):
1 apt-get update
2 apt-get install -y libjemalloc-dev
3 In /etc/systemd/system/mariadb.service.d/, edit any .conf file in that directory, adding LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 at the end
4 Run systemctl daemon-reload
5 Run sudo systemctl restart mysql.service
6 Run sudo reboot
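
For reference, a minimal sketch of what that drop-in could look like, assuming the library path from step 3 (the filename jemalloc.conf is arbitrary, and the exact .so name varies by distro and package, so check with ls /usr/lib/x86_64-linux-gnu/ | grep jemalloc):

# /etc/systemd/system/mariadb.service.d/jemalloc.conf (hypothetical filename)
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"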

I have found some respite on a demo server and will be testing on the production server.


Hey @Muzzy. Did you try this on the production server? Can you please share your experience, as I am also facing the same kind of RAM and swap consumption issue?