ERROR on Restoring last backup

I grabbed the most recent backup from the ~/frappe-bench/sites/site1.local/private/backups/ directory and restored it to my test server. It generated the following errors:

MySQL root password:
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 94, in <module>
    main()
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/utils/bench_helper.py", line 18, in main
    click.Group(commands=commands)(prog_name='bench')
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/commands/__init__.py", line 24, in _func
    ret = f(frappe._dict(ctx.obj), *args, **kwargs)
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/commands/site.py", line 115, in restore
    force=context.force)
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/commands/site.py", line 65, in _new_site
    admin_password=admin_password, verbose=verbose, source_sql=source_sql, force=force, reinstall=reinstall)
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/installer.py", line 43, in install_db
    if not 'tabDefaultValue' in frappe.db.get_tables():
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/database.py", line 784, in get_tables
    return [d[0] for d in self.sql("show tables")]
  File "/home/myuserid/frappe-bench/apps/frappe/frappe/database.py", line 176, in sql
    self._cursor.execute(query)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 165, in execute
    result = self._query(query)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/cursors.py", line 321, in _query
    conn.query(q)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 860, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1061, in _read_query_result
    result.read()
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1349, in read
    first_packet = self.connection._read_packet()
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 991, in _read_packet
    packet_header = self._read_bytes(4)
  File "/home/myuserid/frappe-bench/env/local/lib/python2.7/site-packages/pymysql/connections.py", line 1037, in _read_bytes
    CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
myuserid@ubuntu16:~

The trace above is everything that follows the point where the process asks for the MySQL administrator password. The backup file used was one that ERPNext generates automatically every 6 hours and drops in the backups directory of the site. It had been gunzipped and then used in the restore command.
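
For completeness, the restore itself was done with the standard commands, roughly along these lines (the backup filename below is a placeholder for whichever file was most recent):

    cd ~/frappe-bench
    gunzip sites/site1.local/private/backups/<backup-file>-database.sql.gz
    bench --site site1.local restore sites/site1.local/private/backups/<backup-file>-database.sql
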
Other supporting information:

  • The target test server is actually a mirror image of the original server, created from a complete disk image of the original. It is on another IP address and has had a new FQDN assigned to it using the "bench setup add-domain" command, followed by the proper nginx setup and reload.

  • The restore process took about 11 minutes before eventually dumping the above error trace. On any other test server where I restore this same backup, the process usually takes only about 90 seconds.

  • Because the target server is an exact clone of the original server, everything is the same, including the randomized database name and database password, as confirmed by comparing the site_config.json files on the two servers.

  • Even though it takes a very long time to reach the error dump, once I log into the site as an ERPNext user and check the data, it appears that everything in the database backup was restored and is working. However, the extreme processing time and the long error trace make me suspect something is still quite wrong (see the timeout check just below).
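
Given how slow the clone is, one thing I intend to check is the MariaDB timeout settings, since error 2013 is what pymysql reports when the server drops the connection mid-query. Something like this should show the variables that usually matter (the variable names are standard MySQL/MariaDB; which values make sense would depend on the box):

    # show the timeouts and packet limit that typically matter for a long restore
    mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
      ('wait_timeout','interactive_timeout','net_read_timeout','net_write_timeout','max_allowed_packet');"

    # if these look too low for an 11-minute restore, they can be raised in the
    # [mysqld] section of the MariaDB config and the service restarted, e.g.
    #   net_read_timeout   = 600
    #   net_write_timeout  = 600
    #   max_allowed_packet = 256M
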

So, with all this in mind, I wonder if there may be some unknown flag or configuration conflict that doesn't transfer well when a disk image is restored onto a new or different server box.

Is it possible to delete the database for site1.local and then regenerate it from the backup so that everything is configured properly again?
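
To be concrete, what I am imagining is something along these lines; this is only a sketch of what I think should work, since the traceback itself shows the restore already going through _new_site and install_db (the database name comes from site_config.json and the backup filename is a placeholder), and I have not tried it on the clone yet:

    # drop the site's existing database by hand, then let a forced restore
    # recreate it from the SQL backup
    mysql -u root -p -e 'DROP DATABASE <db_name_from_site_config>;'
    bench --site site1.local --force restore sites/site1.local/private/backups/<backup-file>-database.sql
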

Or is there some other fix that one of you might see in the trace listing that I was not able to understand?

I am pretty sure this is all related to my unique test server condition and probably not common, but I would be very appreciative if anyone with any mysql knowledge could help me find a way to correct for this.

Thank you all in advance for reading and applying your thoughts to my unique problem.

BKM

Just bumped into this after getting the same "Lost connection to MySQL server during query" error after a bench --site [sitename] --force restore [/link/to/database.sql]

It seems the restore process finished, though. The instance appears to work completely normally with the updated data.

That is also true for me.

I get this error every Sunday on one of my backup servers. The only difference between this server and the others is that it appears to be very slow at everything I do on it. I believe there are other virtual servers on this hardware that are consuming most of the system resources and leaving my backup server with very little CPU time.

The best I can figure is that the error occurs at some point during the process but is not reported to the console right away, because the script eventually gets an answer to its query and the process continues. With the error buffered, it only shows up after everything else in the script with higher priority has finished running.

Just a theory, but no matter how many times it happens, my data is always accurate on the restored server. I think you only see this if you have a very slow server.
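
For anyone who wants to make the same sanity check on their own restored site, a couple of direct queries against the site database are enough; db_name and db_password come from site_config.json, and the names below are only placeholders:

    # check that the table the installer was querying for actually exists
    mysql -u <db_name> -p<db_password> <db_name> -e "SHOW TABLES LIKE 'tabDefaultValue';"

    # rough table count, to compare against the source server
    mysql -u <db_name> -p<db_password> <db_name> -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '<db_name>';"
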

BKM