I'm new here too, so here is what I know (or think I know) so far:
The two warnings in your log can be temporarily silenced with:
sudo bash -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
sudo sysctl vm.overcommit_memory=1
But these are all just warnings, nothing critical!
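Note that both commands above only last until the next reboot. If you want them to persist, a common approach is roughly the following sketch (the file name 99-redis.conf is my own choice, and the exact paths can vary by distro):

```shell
# Persist the overcommit setting via sysctl.d so it is applied at boot
echo "vm.overcommit_memory = 1" | sudo tee /etc/sysctl.d/99-redis.conf
sudo sysctl --system   # reload all sysctl config files now

# Transparent hugepages have no sysctl knob; the echo into
# /sys/kernel/mm/transparent_hugepage/enabled has to be re-run at each
# boot, e.g. from a small oneshot systemd unit or rc.local.
```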
The same goes for the io.open/bufsize warning in my own log below (pasted for comparison), and also for your Redis and bench warnings.
erpnext@erpnext:/opt/bench/erpnext$ bench start
/usr/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
[the two lines above repeat another nine times, once per subprocess]
09:25:36 system | redis_socketio.1 started (pid=6874)
09:25:36 system | redis_cache.1 started (pid=6875)
09:25:36 redis_cache.1 | 6881:C 26 Nov 2021 09:25:36.172 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
09:25:36 system | redis_queue.1 started (pid=6876)
09:25:36 system | web.1 started (pid=6877)
09:25:36 redis_cache.1 | 6881:C 26 Nov 2021 09:25:36.193 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=6881, just started
09:25:36 redis_cache.1 | 6881:C 26 Nov 2021 09:25:36.196 # Configuration loaded
09:25:36 redis_socketio.1 | 6878:C 26 Nov 2021 09:25:36.202 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
09:25:36 redis_socketio.1 | 6878:C 26 Nov 2021 09:25:36.202 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=6878, just started
09:25:36 redis_socketio.1 | 6878:C 26 Nov 2021 09:25:36.202 # Configuration loaded
09:25:36 redis_socketio.1 | 6878:M 26 Nov 2021 09:25:36.203 * Increased maximum number of open files to 10032 (it was originally set to 1024).
09:25:36 redis_queue.1 | 6882:C 26 Nov 2021 09:25:36.204 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
09:25:36 redis_queue.1 | 6882:C 26 Nov 2021 09:25:36.204 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=6882, just started
09:25:36 redis_queue.1 | 6882:C 26 Nov 2021 09:25:36.204 # Configuration loaded
09:25:36 redis_queue.1 | 6882:M 26 Nov 2021 09:25:36.205 * Increased maximum number of open files to 10032 (it was originally set to 1024).
09:25:36 redis_queue.1 | 6882:M 26 Nov 2021 09:25:36.208 * Running mode=standalone, port=11000.
09:25:36 system | socketio.1 started (pid=6887)
09:25:36 redis_cache.1 | 6881:M 26 Nov 2021 09:25:36.213 * Increased maximum number of open files to 10032 (it was originally set to 1024).
09:25:36 redis_cache.1 | 6881:M 26 Nov 2021 09:25:36.215 * Running mode=standalone, port=13000.
09:25:36 system | schedule.1 started (pid=6888)
09:25:36 redis_cache.1 | 6881:M 26 Nov 2021 09:25:36.215 # Server initialized
09:25:36 redis_cache.1 | 6881:M 26 Nov 2021 09:25:36.216 * Ready to accept connections
09:25:36 redis_socketio.1 | 6878:M 26 Nov 2021 09:25:36.222 * Running mode=standalone, port=12000.
09:25:36 redis_socketio.1 | 6878:M 26 Nov 2021 09:25:36.223 # Server initialized
09:25:36 redis_socketio.1 | 6878:M 26 Nov 2021 09:25:36.223 * Ready to accept connections
09:25:36 system | watch.1 started (pid=6889)
09:25:36 redis_queue.1 | 6882:M 26 Nov 2021 09:25:36.224 # Server initialized
09:25:36 redis_queue.1 | 6882:M 26 Nov 2021 09:25:36.224 * Ready to accept connections
09:25:36 system | worker_long.1 started (pid=6906)
09:25:36 system | worker_short.1 started (pid=6905)
09:25:36 system | worker_default.1 started (pid=6907)
09:25:38 socketio.1 | listening on *: 9000
09:25:40 watch.1 |
09:25:41 web.1 | * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
09:25:41 web.1 | * Restarting with stat
09:25:41 web.1 | * Debugger is active!
09:25:41 web.1 | * Debugger PIN: 396-672-958
09:25:42 web.1 | 192.168.2.232 - - [26/Nov/2021 09:25:42] "GET /api/method/frappe.realtime.get_user_info?sid=906639ed0d7bdaff9cf7a2ddb9d3580a7ad62c07bf17a7153183137e HTTP/1.1" 200 -
09:25:42 watch.1 | yarn run v1.22.15
09:25:42 watch.1 | $ node esbuild --watch
09:25:45 watch.1 | clean: postcss.plugin was deprecated. Migration guide:
09:25:45 watch.1 | https://evilmartians.com/chronicles/postcss-8-plugin-migration
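To see why the io.open warning is harmless, here is a minimal sketch that reproduces it outside of bench: asking for line buffering (buffering=1) on a binary stream isn't supported, so Python just falls back to the default buffer size and emits a RuntimeWarning, exactly like subprocess.py does in the log above.

```python
import io
import os
import tempfile
import warnings

# Create a small temp file to open in binary mode.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello\n")
os.close(fd)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # buffering=1 means "line buffering", which binary mode does not
    # support -> Python warns and uses the default buffer size instead.
    f = io.open(path, "rb", 1)
    data = f.read()
    f.close()

os.remove(path)

# The file was still read correctly; only a RuntimeWarning was raised.
print(caught[0].category.__name__)
print(data)
```

So the warning changes nothing about the data being read; it only tells you the requested buffering mode was ignored.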
Ah yes, and the two identical GET requests don't explain this bug either.
IMHO, this is buggy JS inside ERPNext.
The best example is still:
'already loaded' side-menu click (content gets duplicated)
vs.
edit + discard (no bug; it even removes the previously duplicated content)
=> both happen without any data being fetched from the server (no HTTP request), purely client side.
Strange that no one else is experiencing this or can say anything about it.
I'll give it another try next week and set up a second VM.