I dumped GCP over the same concerns that @flexy2ky was having with DO. I went to DO for a short while but could not get consistent customer service so I left. I think sometimes their decisions are a bit “random” and not really focused on providing the best service, but that was only my experience.
@bkm I must say a lot of research and work went into the Poor Man’s Backup System you built. It’s quite extensive and surely a better option than the Dropbox backup system. However, is it possible to adapt this to, say, a file storage service like S3 or something else?
Restoring to a backup server is by far the most sane and safe thing to do, but in a case where I don’t want to set up backup servers yet need hourly backups to restore where necessary, would it be possible to adapt this? (Disclaimer: I have migrated to AWS and I have created a snapshot of my server which I plan to update monthly.) I don’t want to maintain multiple servers or keep an online snapshot which auto-updates; that way, if something breaks, I can just restore from a snapshot and load the latest backup.
The only thing different between what you want to do and what I did in the poor man’s system is that I use a live backup server (and a storage site); you just want a storage site. In that case, any storage location is fine. Your problem only arises in the ability to automatically sort the backup files to keep only the most recent ones. S3, like Dropbox, is only a storage location and not a full Linux environment. This means that you cannot run the sorting scripts and the cron jobs from within S3.
It is only a storage location, so you would have to go into it and sort the files yourself. They may have some rudimentary ability to delete all but the last X number of files.
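(S3 does in fact have lifecycle rules, though they expire objects by age rather than keeping the last X by count. A sketch, assuming a hypothetical bucket named my-backups with backups under a site1/ prefix, saved as lifecycle.json:)

{ "Rules": [ { "ID": "expire-old-backups", "Filter": { "Prefix": "site1/" }, "Status": "Enabled", "Expiration": { "Days": 2 } } ] }

Applied with the AWS CLI:

aws s3api put-bucket-lifecycle-configuration --bucket my-backups --lifecycle-configuration file://lifecycle.json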
The scripts on the host would remain basically the same. The only thing you would do is replace the “scp” command with whatever command you use to transfer the file to your S3 storage.
Again, you would have to work out how to keep them sorted on the S3 side.
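One way to handle both the transfer and the sorting from the production host, as a rough sketch (the AWS CLI being installed and configured, the bucket name my-backups, and the backup directory are all assumptions):

#!/usr/bin/env bash
# Upload the newest backup archive to S3, then keep only the last 24 copies.
set -eu
BACKUP_DIR="/home/frappe/backups"   # where the backup files land (assumption)
BUCKET="s3://my-backups/site1"      # hypothetical bucket/prefix
KEEP=24
# This cp line is what replaces the "scp" command in the original scripts.
latest=$(ls -1t "$BACKUP_DIR"/*.sql.gz | head -n 1)
aws s3 cp "$latest" "$BUCKET/"
# "Sorting" on the S3 side: list oldest-first, remove everything but the last $KEEP.
aws s3 ls "$BUCKET/" | sort | head -n -"$KEEP" | awk '{print $4}' | while read -r key; do
  [ -n "$key" ] && aws s3 rm "$BUCKET/$key"
done

Run hourly from cron, this gives you the same rolling window the poor man’s system keeps, just enforced from the production host instead of the storage box.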
Alternatively, you could also get yourself a very cheap OpenVZ server that only has 1 GB of memory and maybe 30 GB of disk. It would be able to run the sorting scripts and keep very good track of your backup files (even if there is no ERPNext system there to use them). It would still be perfectly happy storing and sorting once per hour. I have a similar server that does nothing but receive the files and sort them in case I need one someday. I think it cost me about $24 per year for that particular server. I have only ever had to recall a file from it one time in the past year, but that one file saved me 6 days of rework, so that one event was worth the $24 I paid for the year.
I use this storage server to hold the backups from 4 other ERPNext sites. I give each one a directory in the default user’s home directory and the sorting scripts take care of the rest.
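For reference, the sorting job on such a storage server can be as small as this (a sketch; the user name, directory layout, and retention count are assumptions):

#!/usr/bin/env bash
# Hourly cron job: in each site's directory, keep only the 24 newest files.
set -eu
KEEP=24
for dir in /home/backupuser/site*/; do
  # List newest-first, skip the first $KEEP, delete the remainder.
  ls -1t "$dir" | tail -n +$((KEEP + 1)) | while read -r f; do
    rm -f "${dir}${f}"
  done
done

With a crontab entry like: 0 * * * * /home/backupuser/sort_backups.sh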
Hi,
I just tried running the commands on a fresh server on Alibaba Cloud and am getting these results:
~$ sudo -H apt-get install python3-minimal
Reading package lists… Done
Building dependency tree
Reading state information… Done
python3-minimal is already the newest version (3.6.7-1~18.04).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
~$ sudo -H apt-get install python3-setuptools
Reading package lists… Done
Building dependency tree
Reading state information… Done
Package python3-setuptools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package ‘python3-setuptools’ has no installation candidate
Could you help me find the reason here, and possibly the solution as well? Thanks in advance.
@bkm I have been doing a lot of research, and based on my manual tests using the commands in the script with some modifications, both backup and sorting will work on S3. Basically, S3 can serve as a backup endpoint or mid-point between the production server and the backup server. I will share the full process so you can add it to the poor man’s backup tutorial.
I am facing one major snag in automating the backup and sorting process, though. In order to execute the copy to S3, I need elevated privileges, which means I need to add sudo to the copy command in the script. This is proving a major challenge, as it requires a password to execute successfully. I edited the sudoers file to not require a password, but for some reason I am still hitting a logjam as this error pops up:
sudo: no tty present and no askpass program specified
If I can find a way to run the command within the script but with elevated privileges, my work will be as good as done, and I may be able to figure out how to replicate the same for other file storage services. And who knows, it may just become another backup option merged into the core if we can find a way to fine-tune it and make it into an app. I could therefore use your help here to overcome this snag.
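For what it’s worth, that error usually means sudo still wants a password (or a terminal) in a non-interactive context, so the sudoers edit probably didn’t match the command being run. A narrowly scoped entry is the usual fix; a sketch, with the user name frappe and the aws binary path as assumptions (edit with visudo, never directly):

# /etc/sudoers.d/s3-backup  (open with: sudo visudo -f /etc/sudoers.d/s3-backup)
frappe ALL=(root) NOPASSWD: /usr/local/bin/aws
# If the job runs from cron and the error persists, also relax the tty requirement:
Defaults:frappe !requiretty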
I already tried without -H and got the same error. I then reinitialized the disk and tried the method again. This time I faced the main error again and again.
If you are running Ubuntu 18, then the only reason there would be no install candidate is if the pointers to the sources were missing or corrupted.
The way to fix that is to run:
sudo apt-get update
sudo apt-get upgrade
The update command refreshes the package lists from all the configured sources, and the upgrade command then brings the installed packages up to date; after that, the install commands can find their candidates.
Hi, I think you should change your mirror list; don’t use the Alibaba Cloud mirror list. Try to find another mirror which is close to you.
sudo nano /etc/apt/sources.list
then change your mirror list (see the example below), save and exit
sudo apt-get update
then install the python3-minimal and python3-setuptools
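For reference, a minimal /etc/apt/sources.list for Ubuntu 18.04 (bionic) pointing at the main Ubuntu archive (substitute any mirror close to you):

deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse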
I changed the sources and got the following while running sudo apt-get upgrade:
~$ sudo apt-get upgrade
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use ‘sudo apt autoremove’ to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Then I tried to run
~$ sudo -H apt-get install redis-server
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use ‘sudo apt autoremove’ to remove them.
The following additional packages will be installed:
  libjemalloc1 redis-tools
Suggested packages:
  ruby-redis
The following NEW packages will be installed:
  libjemalloc1 redis-server redis-tools
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 634 kB of archives.
After this operation, 3,012 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 Index of /ubuntu/ bionic/universe amd64 libjemalloc1 amd64 3.6.0-11 [82.4 kB]
Get:2 Index of /ubuntu/ bionic-updates/universe amd64 redis-tools amd64 5:4.0.9-1ubuntu0.2 [516 kB]
Get:3 Index of /ubuntu/ bionic-updates/universe amd64 redis-server amd64 5:4.0.9-1ubuntu0.2 [35.4 kB]
Fetched 634 kB in 0s (14.2 MB/s)
Selecting previously unselected package libjemalloc1.
(Reading database … 145034 files and directories currently installed.)
Preparing to unpack …/libjemalloc1_3.6.0-11_amd64.deb …
Unpacking libjemalloc1 (3.6.0-11) …
Selecting previously unselected package redis-tools.
Preparing to unpack …/redis-tools_5%3a4.0.9-1ubuntu0.2_amd64.deb …
Unpacking redis-tools (5:4.0.9-1ubuntu0.2) …
Selecting previously unselected package redis-server.
Preparing to unpack …/redis-server_5%3a4.0.9-1ubuntu0.2_amd64.deb …
Unpacking redis-server (5:4.0.9-1ubuntu0.2) …
Setting up libjemalloc1 (3.6.0-11) …
Setting up redis-tools (5:4.0.9-1ubuntu0.2) …
Setting up redis-server (5:4.0.9-1ubuntu0.2) …
Job for redis-server.service failed because a timeout was exceeded.
See “systemctl status redis-server.service” and “journalctl -xe” for details.
invoke-rc.d: initscript redis-server, action “start” failed.
● redis-server.service - Advanced key-value store
Dec 23 13:27:16 alibaba-v12 systemd[1]: Failed to start Advanced key-value store.
dpkg: error processing package redis-server (--configure):
 installed redis-server package post-installation script subprocess returned error exit status 1
Processing triggers for libc-bin (2.27-3ubuntu1) …
Processing triggers for systemd (237-3ubuntu10.33) …
Processing triggers for man-db (2.8.3-2ubuntu0.1) …
Processing triggers for ureadahead (0.100.0-21) …
Errors were encountered while processing:
 redis-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
I think you are still using the Alibaba Cloud mirror… because it still reads from (Index of /ubuntu/).
But it is okay; I think you’ve installed Redis, but it can’t start…
Try editing redis.conf:
sudo nano /etc/redis/redis.conf
then find the line “bind 127.0.0.1 ::1” and change it to “bind 127.0.0.1” (if IPv6 is disabled on the host, which is common on some cloud images, Redis cannot bind to ::1 and fails to start)
save and exit
sudo systemctl restart redis-server
sudo systemctl status redis-server
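If you prefer to make that edit non-interactively, a one-liner sketch (it writes a .bak backup of the file first; the path assumes the stock Ubuntu package layout):

sudo sed -i.bak 's/^bind 127.0.0.1 ::1/bind 127.0.0.1/' /etc/redis/redis.conf
sudo systemctl restart redis-server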
I get an access denied error; however, if I put sudo in front, it runs OK. But I am worried I may be creating problems later. Do you have any idea what may be causing this?
Is it important? Or perhaps it is working despite the warning?
I created a new user and gave it sudo privileges etc. as per your instructions.
Perhaps I am not in the correct place in the file structure?
Yes, adding sudo would generate problems later when you need to update bench.
However, I see that you have run the [wget] command from the root user’s home directory. This is your problem.
You are supposed to use the root login only long enough to create the Linux user that will then be used to run everything else. As per the post above… (see the BOLD text)
Maybe you missed the preamble to the commands listed in the post. If you log out of the root user and then log back in as the new user that you just gave sudo rights to, then you should be able to get it to work.
Thanks bkm,
I need to learn the difference between the root user and the root location in the file system.
I did log out and then back in as the new user.
The location it logged me in at was the root location.
When you load a new instance of erpnext what does the file structure look like for you?
I have root/home/username/frappe-bench on this latest environment.
When you use the [adduser] command in Ubuntu 18.x, it creates a user directory in the /home directory. From that point forward, every time you log in as the new user, your default home directory will be /home/new_user.
Therefore, the [wget] command is to be run from this new user’s home directory (/home/new_user).
Once the install.py file is transferred into the new user’s home directory, the install command is also run from this new user’s directory, as such:
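For the bench easy-install script of that era, a typical invocation would look something like this (the user name frappe is a placeholder, and the wget URL should come from the guide you followed; check that guide for the exact options):

cd /home/frappe
wget <install.py URL from the guide>
sudo python3 install.py --production --user frappe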