If you are using the Frappe/ERPNext cloud service, then I believe you are limited to whatever you can get their service department to provide on a “one off” basis.
However, if you are hosting it yourself on a VPS, then you can easily make a backup of your entire server and restore it to another VPS of the same specifications (usually with the same vendor). I do this all the time even now. I maintain at least 3 VPS servers for every client:
- The first one is their primary “Live” server for daily operation.
- The second one is a backup server that can be made active in less than 30 min in the event of a catastrophic failure of the main Live server.
- The third one is a sandbox for the client to teach their staff and work out new workflow issues without impacting their live daily work.
Aside from the main Live server, the second one is the most important. I use a crontab job entry to run a manual data backup every hour during the business day (11 hours, from 7 am to 7 pm) and then a final one for the day at two minutes before midnight, for a total of 12 backups per day.
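For reference, the crontab entries look roughly like this (the script name, path, and exact hours are just placeholders to show the pattern):

```
# hourly backups through the business day (adjust the hours to suit your schedule)
0 7-17 * * * /home/DefUser/scripts/hourly_backup.sh
# final backup of the day at two minutes before midnight
58 23 * * * /home/DefUser/scripts/hourly_backup.sh
```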
Those backups are created in a specific directory under the default user's home directory. Part of the script that creates the backups also takes care of moving the last backup to a different directory before dropping in the new one. It also uses the “scp” command to copy the most recent backup from the live server to the backup server.
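In rough outline, the rotation part of that script is nothing fancy; the directory names here are just examples, and the actual dump and scp commands are the ones shown further down:

```bash
#!/bin/bash
# Sketch of the rotation step only -- adjust the paths to your own layout.
CURRENT=/home/DefUser/backups/current
PREVIOUS=/home/DefUser/backups/previous

# move the last backup out of the way before the new dump lands
mv "$CURRENT"/*.sql.gz "$PREVIOUS"/ 2>/dev/null

# ...then the mysqldump and scp commands shown below run here
```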
On the backup server, other crontab scripts detect the transferred file and manage a regulated storage scheme, while also staging the most recent backup file in a specific directory so that, in the event of a catastrophic failure, it can be restored quickly and the server restarted to take the place of the main Live server.
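Just to give you the idea, the receiving side can be as simple as something like this; the paths and retention count are examples only, not my exact script:

```bash
#!/bin/bash
# Rough sketch of the backup-server side -- paths and retention count are examples only.
DROP=/home/DefUser/drop                  # where scp drops new backups
READY=/home/DefUser/restore_ready        # most recent backup staged here for a fast restore
ARCHIVE=/home/DefUser/backups/archive    # older copies kept under a simple retention rule

for f in "$DROP"/*.sql.gz; do
    [ -e "$f" ] || continue              # nothing new has arrived
    # archive whatever is currently staged, then stage the new arrival
    mv "$READY"/*.sql.gz "$ARCHIVE"/ 2>/dev/null
    mv "$f" "$READY"/
done

# keep only the newest 48 archived backups (about four days at 12 per day)
ls -1t "$ARCHIVE"/*.sql.gz 2>/dev/null | tail -n +49 | xargs -r rm -f
```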
All this means that “if” a failure occurs, I can get them running again on the backup server in less than 30 min, and in the worst case they are only missing an hour's worth of work. That beats losing everything, waiting several hours or more, and possibly losing far more data.
I know there are more elaborate ways of doing this using server mirroring, automatic switch-over, etc., but those are expensive to set up and even more expensive to maintain. My simpler solution is not as slick as the auto-switching method, but it gets the job done at about 2% or 3% of the cost of setting up and maintaining the fancier solutions… and for what exactly?!? Maybe 3 minutes of downtime and little or no data loss, versus possibly having to repeat up to an hour's worth of work? When given the choice, almost every one of my clients chooses the cheaper method.
Anyway, here is the command I use for running the backup every hour:
mysqldump -u root -pMySqlPassword 1ba3e0274ad89191 | gzip > /home/DefUser/backups/current/"$(date '+%m%d-%H%M').sql.gz"
I put that command in my script and it generates the backup file in gzip form, just like the built-in backup feature does. The strange string following “MySqlPassword” is the actual name of the site's database. You can find it as db_name in the site_config.json file in your site's folder under the bench's sites directory.
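If you want to pull that name out without opening the file, something like this works (the bench path and site folder name here are placeholders for your own):

```bash
# read db_name from site_config.json (adjust the bench path and site folder)
grep '"db_name"' ~/frappe-bench/sites/yoursite.com/site_config.json
# or, if jq is installed, get just the value:
jq -r .db_name ~/frappe-bench/sites/yoursite.com/site_config.json
```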
Here is the command that I use to transfer the file to the backup host:
scp /home/DefUser/backups/current/*.sql.gz DefUser@192.168.222.111:/home/DefUser/drop/
Of course, you would have to use your own default user account and replace the made-up IP address here with the real IP address of your backup host.
There are a few more things that need to be set up to make this work, but you should pretty much get the idea from this.
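One of those things is key-based SSH authentication between the two servers, so the scp in the cron job can run without prompting for a password. Roughly:

```bash
# on the live server: create a key (no passphrase) and install it on the backup host
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
ssh-copy-id -i ~/.ssh/id_ed25519.pub DefUser@192.168.222.111
```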
Finally, I have a script that sends the last backup file of the week to the sandbox server. As part of my regular maintenance, I restore that file on the sandbox server on Monday mornings so the client has a fresh set of data to experiment and train with for the week.
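The restore itself is the same idea in reverse. A minimal sketch, assuming the weekly file has already landed on the sandbox server and using placeholder file, database, and site names:

```bash
# load the gzipped dump back into the sandbox site's database
gunzip -c /home/DefUser/drop/weekly.sql.gz | mysql -u root -pMySqlPassword 1ba3e0274ad89191
# then clear caches so the site picks up the restored data
cd ~/frappe-bench && bench --site yoursite.com clear-cache
```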
Hope this helps…
BKM