bench start fails with "Redis is now ready to exit, bye bye..."

Hi Team,

bench start is giving a Redis error. How can I solve this?

bench@ip-X.X.X.X:~/erpnext$ bench start
13:11:08 system | watch.1 started (pid=1881)
13:11:08 system | worker_long.1 started (pid=1880)
13:11:08 system | web.1 started (pid=1883)
13:11:08 system | redis_socketio.1 started (pid=1884)
13:11:08 system | schedule.1 started (pid=1885)
13:11:08 system | socketio.1 started (pid=1887)
13:11:08 system | redis_queue.1 started (pid=1882)
13:11:08 system | worker_default.1 started (pid=1886)
13:11:08 system | redis_cache.1 started (pid=1888)
13:11:08 redis_queue.1 | 1902:C 16 Aug 13:11:08.423 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13:11:08 redis_queue.1 | 1902:C 16 Aug 13:11:08.424 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=1902, just started
13:11:08 redis_queue.1 | 1902:C 16 Aug 13:11:08.424 # Configuration loaded
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.425 * Increased maximum number of open files to 10032 (it was originally set to 1024).
13:11:08 system | worker_short.1 started (pid=1889)
13:11:08 redis_cache.1 | 1901:C 16 Aug 13:11:08.428 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13:11:08 redis_cache.1 | 1901:C 16 Aug 13:11:08.428 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=1901, just started
13:11:08 redis_cache.1 | 1901:C 16 Aug 13:11:08.428 # Configuration loaded
13:11:08 redis_socketio.1 | 1907:C 16 Aug 13:11:08.429 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
13:11:08 redis_socketio.1 | 1907:C 16 Aug 13:11:08.429 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=1907, just started
13:11:08 redis_socketio.1 | 1907:C 16 Aug 13:11:08.429 # Configuration loaded
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.430 * Increased maximum number of open files to 10032 (it was originally set to 1024).
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.431 * Increased maximum number of open files to 10032 (it was originally set to 1024).
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.435 * Running mode=standalone, port=11000.
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.436 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.436 # Server initialized
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.436 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.436 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.438 * Running mode=standalone, port=13000.
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.439 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.439 # Server initialized
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.439 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.439 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
13:11:08 redis_cache.1 | 1901:M 16 Aug 13:11:08.440 * Ready to accept connections
13:11:08 redis_queue.1 | 1902:M 16 Aug 13:11:08.440 * Ready to accept connections
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.440 * Running mode=standalone, port=12000.
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.441 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.441 # Server initialized
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.441 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.441 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
13:11:08 redis_socketio.1 | 1907:M 16 Aug 13:11:08.442 * Ready to accept connections
13:11:09 socketio.1 | module.js:550
13:11:09 socketio.1 | throw err;
13:11:09 socketio.1 | ^
13:11:09 socketio.1 |
13:11:09 socketio.1 | Error: Cannot find module 'express'
13:11:09 socketio.1 | at Function.Module._resolveFilename (module.js:548:15)
13:11:09 socketio.1 | at Function.Module._load (module.js:475:25)
13:11:09 socketio.1 | at Module.require (module.js:597:17)
13:11:09 socketio.1 | at require (internal/module.js:11:18)
13:11:09 socketio.1 | at Object.<anonymous> (/opt/bench/erpnext/apps/frappe/socketio.js:1:73)
13:11:09 socketio.1 | at Module._compile (module.js:653:30)
13:11:09 socketio.1 | at Object.Module._extensions..js (module.js:664:10)
13:11:09 socketio.1 | at Module.load (module.js:566:32)
13:11:09 socketio.1 | at tryModuleLoad (module.js:506:12)
13:11:09 socketio.1 | at Function.Module._load (module.js:498:3)
13:11:09 system | socketio.1 stopped (rc=1)
13:11:09 system | sending SIGTERM to redis_socketio.1 (pid 1884)
13:11:09 system | sending SIGTERM to worker_long.1 (pid 1880)
13:11:09 system | sending SIGTERM to watch.1 (pid 1881)
13:11:09 system | sending SIGTERM to redis_queue.1 (pid 1882)
13:11:09 system | sending SIGTERM to web.1 (pid 1883)
13:11:09 system | sending SIGTERM to schedule.1 (pid 1885)
13:11:09 system | sending SIGTERM to worker_default.1 (pid 1886)
13:11:09 system | sending SIGTERM to redis_cache.1 (pid 1888)
13:11:09 system | sending SIGTERM to worker_short.1 (pid 1889)
13:11:09 redis_socketio.1 | 1907:signal-handler (1565961069) Received SIGTERM scheduling shutdown...
13:11:09 redis_queue.1 | 1902:signal-handler (1565961069) Received SIGTERM scheduling shutdown...
13:11:09 redis_cache.1 | 1901:signal-handler (1565961069) Received SIGTERM scheduling shutdown...
13:11:09 redis_socketio.1 | 1907:M 16 Aug 13:11:09.043 # User requested shutdown...
13:11:09 redis_socketio.1 | 1907:M 16 Aug 13:11:09.043 * Removing the pid file.
13:11:09 redis_socketio.1 | 1907:M 16 Aug 13:11:09.043 # Redis is now ready to exit, bye bye...
13:11:09 system | redis_socketio.1 stopped (rc=-15)
13:11:09 system | web.1 stopped (rc=-15)
13:11:09 system | schedule.1 stopped (rc=-15)
13:11:09 system | worker_short.1 stopped (rc=-15)
13:11:09 system | worker_default.1 stopped (rc=-15)
13:11:09 system | watch.1 stopped (rc=-15)
13:11:09 system | worker_long.1 stopped (rc=-15)
13:11:09 redis_cache.1 | 1901:M 16 Aug 13:11:09.141 # User requested shutdown...
13:11:09 redis_cache.1 | 1901:M 16 Aug 13:11:09.141 * Removing the pid file.
13:11:09 redis_cache.1 | 1901:M 16 Aug 13:11:09.141 # Redis is now ready to exit, bye bye...
13:11:09 redis_queue.1 | 1902:M 16 Aug 13:11:09.142 # User requested shutdown...
13:11:09 redis_queue.1 | 1902:M 16 Aug 13:11:09.142 * Removing the pid file.
13:11:09 redis_queue.1 | 1902:M 16 Aug 13:11:09.142 # Redis is now ready to exit, bye bye...
13:11:09 system | redis_queue.1 stopped (rc=-15)
13:11:09 system | redis_cache.1 stopped (rc=-15)
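(Side note: the three kernel warnings in the Redis log above spell out their own fixes. Collected here as a sketch, staged in a temp file so the values can be reviewed before being merged into /etc/sysctl.conf; the somaxconn value is an assumption, since Redis only asks that it exceed the backlog of 511.)

```shell
# Stage the sysctl settings Redis warns about; review, then merge into
# /etc/sysctl.conf and apply with `sudo sysctl -p`.
cat > /tmp/99-redis-tuning.conf <<'EOF'
net.core.somaxconn = 1024
vm.overcommit_memory = 1
EOF

# Transparent Huge Pages must be disabled as root (and the command added to
# /etc/rc.local so the setting survives a reboot):
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

These warnings are unrelated to the socketio crash below, but they will reappear on every bench start until fixed.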


A search on this error points to this thread: Error on install ERPNext to new bench instance - #3 by Taher_Khalil

So try this: from frappe-bench, run

bench setup requirements
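bench setup requirements reinstalls the Python and Node dependencies, which is what clears the missing express module. As a quick sanity check of whether the modules socketio.js needs can be resolved, here is a sketch (run it from the frappe app directory, e.g. apps/frappe; the module list beyond express is an assumption):

```shell
# Report which Node modules resolve from the current directory; any "missing"
# entry would reproduce the "Cannot find module" crash seen in the log.
check_modules() {
  for mod in "$@"; do
    if node -e "require('$mod')" >/dev/null 2>&1; then
      echo "$mod: ok"
    else
      echo "$mod: missing"
    fi
  done
}

check_modules express socket.io superagent redis
```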

Hi,

When I run bench setup requirements, it gets stuck on a .js file (list.min.js) and I am unable to proceed further.

To move on to the next step, I first have to reboot my AWS instance.

How can I solve this?

The outcomes of bench setup requirements are below:

Re your post #3:

I can't say I follow what you are telling me.

You may want to include session output to help folks recognize the context.

This may help you with clues: Cannot find module 'rollup'

Hi Sir,

I followed the above steps and am now getting the error below:

07:14:36 socketio.1 | read: [Function] },
07:14:38 socketio.1 | request:
07:14:40 socketio.1 | Request {
07:14:40 socketio.1 | domain: null,
07:14:42 socketio.1 | _events: {},
07:14:43 socketio.1 | _eventsCount: 0,
07:14:46 socketio.1 | _maxListeners: undefined,
07:14:48 socketio.1 | _agent: false,
07:14:49 socketio.1 | _formData: null,
07:14:53 socketio.1 | method: 'GET',
07:14:54 socketio.1 | url: 'http://3.16.124.136:8000/api/method/frappe.realtime.get_user_info?sid=427501baad26a46b8a37ca68c455f3a20b53c25ff552a614819162c3',
07:14:56 socketio.1 | _header: [Object],
07:14:57 socketio.1 | header: [Object],
07:14:58 socketio.1 | writable: true,
07:15:04 socketio.1 | _redirects: 0,
07:15:05 socketio.1 | _maxRedirects: 5,
07:15:07 socketio.1 | cookies: '',
07:15:07 socketio.1 | qs: {},
07:15:09 socketio.1 | _query: [],
07:15:11 socketio.1 | qsRaw: [],
07:15:11 socketio.1 | _redirectList: [],
07:15:11 socketio.1 | _streamRequest: false,
07:15:13 socketio.1 | req: [Object],
07:15:14 socketio.1 | protocol: 'http:',
07:15:15 socketio.1 | host: '3.16.124.136:8000',
07:15:17 socketio.1 | _endCalled: true,
07:15:17 socketio.1 | _callback: [Function],
07:15:19 socketio.1 | res: [Object],
07:15:21 socketio.1 | response: [Circular],
07:15:25 socketio.1 | called: true },
07:15:28 socketio.1 | req:
07:15:29 socketio.1 | ClientRequest {
07:15:35 socketio.1 | domain: null,
07:15:37 socketio.1 | _events: [Object],
07:15:40 socketio.1 | _eventsCount: 3,
07:15:41 web.1 | Traceback (most recent call last):
07:15:43 socketio.1 | _maxListeners: undefined,
07:15:42 web.1 | File "/opt/bench/erpnext/apps/frappe/frappe/middlewares.py", line 15, in __call__
07:15:44 socketio.1 | output: [],
07:15:45 web.1 | return super(StaticDataMiddleware, self).__call__(environ, start_response)
07:15:44 socketio.1 | outputEncodings: [],
07:15:47 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/werkzeug/middleware/shared_data.py", line 220, in __call__
07:15:48 socketio.1 | outputCallbacks: [],
07:15:49 web.1 | return self.app(environ, start_response)
07:15:48 socketio.1 | outputSize: 0,
07:15:49 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/werkzeug/middleware/shared_data.py", line 220, in __call__
07:15:49 socketio.1 | writable: true,
07:15:51 socketio.1 | _last: true,
07:15:51 web.1 | return self.app(environ, start_response)
07:15:51 socketio.1 | upgrading: false,
07:15:51 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/werkzeug/local.py", line 231, in application
07:15:52 socketio.1 | chunkedEncoding: false,
07:15:53 socketio.1 | shouldKeepAlive: false,
07:15:54 web.1 | return ClosingIterator(app(environ, start_response), self.cleanup)
07:15:54 socketio.1 | useChunkedEncodingByDefault: false,
07:15:55 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/werkzeug/wrappers/base_request.py", line 237, in application
07:15:57 socketio.1 | sendDate: false,
07:15:59 socketio.1 | _removedConnection: false,
07:15:56 web.1 | resp = f(*args[:-2] + (request,))
07:16:00 socketio.1 | _removedContLen: false,
07:16:00 socketio.1 | _removedTE: false,
07:16:02 socketio.1 | _contentLength: 0,
07:15:58 web.1 | File "/opt/bench/erpnext/apps/frappe/frappe/app.py", line 84, in application
07:16:03 socketio.1 | _hasBody: true,
07:15:58 web.1 | rollback = after_request(rollback)
07:15:59 web.1 | File "/opt/bench/erpnext/apps/frappe/frappe/app.py", line 203, in after_request
07:16:04 socketio.1 | _trailer: '',
07:16:01 web.1 | updated_in_db = frappe.local.session_obj.update()
07:16:10 socketio.1 | finished: true,
07:16:02 web.1 | File "/opt/bench/erpnext/apps/frappe/frappe/sessions.py", line 359, in update
07:16:04 web.1 | self.data['sid']))
07:16:06 web.1 | File "/opt/bench/erpnext/apps/frappe/frappe/database/database.py", line 156, in sql
07:16:07 web.1 | self._cursor.execute(query, values)
07:16:08 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
07:16:09 web.1 | result = self._query(query)
07:16:13 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
07:16:15 web.1 | conn.query(q)
07:16:16 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
07:16:16 web.1 | self._affected_rows = self._read_query_result(unbuffered=unbuffered)
07:16:17 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
07:16:18 web.1 | result.read()
07:16:18 web.1 | File "/opt/bench/erpnext/env/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
packet_write_wait: Connection to 3.16.124.136 port 22: Broken pipe

Previously I got the same error (packet_write_wait: Connection to 3.16.124.136 port 22: Broken pipe). At that time the team suggested increasing various packet sizes in my.cnf, and after that it worked for some time. (It stopped working after the version 11 to 12 update; after reinstallation a whole new set of problems started, which is why this thread was opened.)

Now, following your latest solutions, I am facing the same packet_write_wait issue.
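(For reference: "increase the packet size in my.cnf" most likely means raising MariaDB/MySQL's max_allowed_packet. A hypothetical fragment, staged in /tmp for review; the 256M value is illustrative, not the one originally suggested:)

```shell
# Hypothetical my.cnf fragment; merge under the [mysqld] section of the real
# my.cnf and restart the database server for it to take effect.
cat > /tmp/my-packet.cnf <<'EOF'
[mysqld]
max_allowed_packet = 256M
EOF
```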

Another issue I am observing here is memory allocation.

The system has 8 GB of space. After installing ERPNext, around 5 GB is automatically used, as shown below. When I check my system disk space using df -h, it shows only about 2 GB free. In version 11, when it was working, I observed around 6 GB of free memory.

root@ip-172-31-46-206:~# df -h

Filesystem      Size  Used Avail Use% Mounted on
udev            481M     0  481M   0% /dev
tmpfs            99M  760K   98M   1% /run
/dev/xvda1      7.7G  5.9G  1.9G  77% /
tmpfs           492M     0  492M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           492M     0  492M   0% /sys/fs/cgroup
/dev/loop1       89M   89M     0 100% /snap/core/7270
/dev/loop0       18M   18M     0 100% /snap/amazon-ssm-agent/1455
/dev/loop2       89M   89M     0 100% /snap/core/7396
tmpfs            99M     0   99M   0% /run/user/1000

It looks like the memory is not being reclaimed.

A broken pipe means you have network issues, so you had best check with your Internet Service Provider.

You are confusing memory with disk storage space!?

For a test system, 2 GB of RAM and 20 GB of disk space is probably the bare minimum.
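To see the two separately: df reports disk, free reports RAM. For example:

```shell
# Disk usage of the root filesystem (what the df -h output above shows):
df -h /

# RAM and swap usage (this, not df, is what "free memory" refers to):
free -h || true   # `free` may be absent on minimal systems; df never shows RAM
```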

lol absolutely yes

A reboot can fix your problem.

Hello, and thanks in advance. I am facing the issue below after restoring a backup:

erpnext@localhost:/opt/bench/erpnext/logs$ bench start
14:17:51 system | redis_cache.1 started (pid=1038)
14:17:51 system | redis_socketio.1 started (pid=1039)
14:17:51 system | redis_queue.1 started (pid=1040)
14:17:51 system | web.1 started (pid=1041)
14:17:51 redis_socketio.1 | 1042:C 22 Nov 2021 14:17:51.035 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
14:17:51 system | socketio.1 started (pid=1053)
14:17:51 redis_cache.1 | 1044:C 22 Nov 2021 14:17:51.045 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
14:17:51 system | watch.1 started (pid=1052)
14:17:51 redis_cache.1 | 1044:C 22 Nov 2021 14:17:51.079 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1044, just started
14:17:51 redis_cache.1 | 1044:C 22 Nov 2021 14:17:51.079 # Configuration loaded
14:17:51 redis_cache.1 | 1044:M 22 Nov 2021 14:17:51.088 * Increased maximum number of open files to 10032 (it was originally set to 1024).
14:17:51 redis_socketio.1 | 1042:C 22 Nov 2021 14:17:51.088 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1042, just started
14:17:51 redis_socketio.1 | 1042:C 22 Nov 2021 14:17:51.088 # Configuration loaded
14:17:51 redis_socketio.1 | 1042:M 22 Nov 2021 14:17:51.091 * Increased maximum number of open files to 10032 (it was originally set to 1024).
14:17:51 redis_queue.1 | 1050:C 22 Nov 2021 14:17:51.094 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
14:17:51 redis_queue.1 | 1050:C 22 Nov 2021 14:17:51.095 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1050, just started
14:17:51 redis_queue.1 | 1050:C 22 Nov 2021 14:17:51.095 # Configuration loaded
14:17:51 redis_queue.1 | 1050:M 22 Nov 2021 14:17:51.097 * Increased maximum number of open files to 10032 (it was originally set to 1024).
14:17:51 redis_cache.1 | 1044:M 22 Nov 2021 14:17:51.102 # Could not create server TCP listening socket 127.0.0.1:13000: bind: Address already in use
14:17:51 system | redis_cache.1 stopped (rc=1)
14:17:51 redis_socketio.1 | 1042:M 22 Nov 2021 14:17:51.105 # Could not create server TCP listening socket 127.0.0.1:12000: bind: Address already in use
14:17:51 system | redis_socketio.1 stopped (rc=1)
14:17:51 system | worker_short.1 started (pid=1062)
14:17:51 redis_queue.1 | 1050:M 22 Nov 2021 14:17:51.126 # Could not create server TCP listening socket 127.0.0.1:11000: bind: Address already in use
14:17:51 system | redis_queue.1 stopped (rc=1)
14:17:51 system | schedule.1 started (pid=1057)
14:17:51 system | worker_default.1 started (pid=1072)
14:17:51 system | worker_long.1 started (pid=1073)
14:17:51 system | sending SIGTERM to web.1 (pid 1041)
14:17:51 system | sending SIGTERM to socketio.1 (pid 1053)
14:17:51 system | sending SIGTERM to watch.1 (pid 1052)
14:17:51 system | sending SIGTERM to schedule.1 (pid 1057)
14:17:51 system | sending SIGTERM to worker_short.1 (pid 1062)
14:17:51 system | sending SIGTERM to worker_long.1 (pid 1073)
14:17:51 system | sending SIGTERM to worker_default.1 (pid 1072)
14:17:51 system | worker_short.1 stopped (rc=-15)
14:17:51 system | schedule.1 stopped (rc=-15)
14:17:51 system | worker_long.1 stopped (rc=-15)
14:17:51 system | worker_default.1 stopped (rc=-15)
14:17:51 system | watch.1 stopped (rc=-15)
14:17:51 system | socketio.1 stopped (rc=-15)
14:17:51 system | web.1 stopped (rc=-15)
erpnext@localhost:/opt/bench/erpnext/logs$

Could not create server TCP listening socket 127.0.0.1:13000: bind: Address already in use
Could not create server TCP listening socket 127.0.0.1:12000: bind: Address already in use
Could not create server TCP listening socket 127.0.0.1:11000: bind: Address already in use

Bench is already running; otherwise, check which process IDs are holding those ports and kill them.

To find the process ID:

lsof -i :11000

and to kill the process:

kill -9 {process id}
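The two steps above can be wrapped in one function so all three bench Redis ports from the log (11000, 12000, 13000) are cleared in one go. A sketch, assuming lsof is available; note that kill -9 is a last resort, so try a plain kill first:

```shell
# Kill whatever is listening on a given TCP port; prints what it did.
kill_port() {
  local pids
  pids=$(lsof -t -i tcp:"$1" -s tcp:listen 2>/dev/null)
  if [ -n "$pids" ]; then
    echo "port $1: killing $pids"
    kill -9 $pids
  else
    echo "port $1: nothing listening"
  fi
}

# Clear the default bench Redis ports before retrying `bench start`:
for p in 11000 12000 13000; do kill_port "$p"; done
```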


Run these commands; they solved my issue.

sudo supervisorctl stop all

This stops all processes managed by Supervisor.

sudo service nginx stop

This stops the Nginx web server. service is a command used to run System V init scripts; in this case it stops the Nginx service.

Then do:

bench start


sudo lsof -t -i tcp:<PORT> -s tcp:listen | sudo xargs kill

Works like a charm <3