Error on ERPNext - Ubuntu 18.04 LTS Installation

Well, I appreciate the vote of confidence, but I was aided last night by some other knights in shining armor @MartinHBramwell and @warsmith

@warsmith was able to find the clues from a @Muzzy post that we needed to get this to work.

As for DO, I tried them once and found their customer service to be lacking. The service I use now is fairly quick to push issues up to the top problem solvers for me and I really appreciate them for it.

Good luck on your installs. :smiley:



This is my first support experience with them and it’s pretty awful. I’ll move over to another host over the weekend. But in the meantime, any tips on how I can set up an hourly backup? I feel that if I hadn’t relied on daily backups I wouldn’t be in this mess.

Have you looked at this yet?

I use this to run an hourly backup of all data and support files, then transmit them to both a second “hot spare” host and a storage host to keep multiple backups.

As for the rest of the server, I use a service that allows me to keep an image backup of a functioning server on hand. In the event of a disaster, I can open a ticket to have them restore the image to another server instance. Then I just restore one of my hourly data backups to be back online.

During the down time I just have the client switch their logins to the hot spare host until the primary host is rebuilt on new hardware.

So, for example, if I have a primary host, then I would have a duplicate of it running all the time that the users can log into in the event of a major disaster. The hot spare is never more than an hour behind.
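The hourly cycle can be sketched as a small cron-driven script. To be clear, the site name, bench path, and host names below are made-up placeholders, not my actual setup:

```shell
#!/usr/bin/env bash
# Hourly backup-and-ship sketch. Site name, bench path, and host names
# are hypothetical placeholders; substitute your own.
SITE="site1.local"
BENCH="/home/frappe/frappe-bench"
BACKUP_DIR="${BENCH}/sites/${SITE}/private/backups"
HOT_SPARE="frappe@hot-spare.example.com:/home/frappe/incoming"
STORAGE="backup@storage.example.com:/home/backup/${SITE}"

stamp() { date -u +%Y%m%d-%H%M; }   # UTC timestamp used in the log line

# Take a fresh backup of the database plus public/private files
if [ -d "$BENCH" ]; then
  cd "$BENCH" && bench --site "$SITE" backup --with-files
fi

# Ship everything in the backup directory to both destinations
for f in "$BACKUP_DIR"/*; do
  [ -e "$f" ] || continue
  scp -q "$f" "$HOT_SPARE"
  scp -q "$f" "$STORAGE"
done

echo "$(stamp) backup cycle finished"
```

Run it from cron every hour, e.g. `0 * * * * /home/frappe/hourly_backup.sh >> /var/log/hourly_backup.log 2>&1`, with key-based SSH auth set up so scp never prompts for a password.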

Maybe you can do something similar.


I will definitely give this a try! Thanks a lot.

What service are you currently using? I’m still on DO and always looking for a better option. Also, thanks for posting the backup tutorial.

W.r.t Digital Ocean …

Check this out :

VPS service providers shutting down notice - #7 by MartinHBramwell

I’m so pleased with that supplier, I’m going to keep boosting them 'til someone tells me to shut up. Disclaimer: my only connection with them is as a customer for many years.

In the USA, I am using one provider on the east coast, and on the west coast I am still experimenting. My west coast servers are not as robust as the east coast provider’s, so I may still move to another west coast provider later.

I am still looking for reliable servers in Chicago and Texas, but nothing worth mentioning yet.



I dumped GCP over the same concerns that @flexy2ky was having with DO. I went to DO for a short while but could not get consistent customer service so I left. I think sometimes their decisions are a bit “random” and not really focused on providing the best service, but that was only my experience.


@bkm I must say a lot of research and work went into the Poor Man’s Backup System you built. It’s quite extensive and surely a better option than the Dropbox backup system. However, is it possible to adapt this to, say, a file storage service like S3 or something else?

Restoring to a backup server is by far the most sane and safe thing to do, but in a case where I don’t want to set up backup servers yet still need hourly backups to restore when necessary, would it be possible to adapt this? (Disclaimer: I have migrated to AWS and created a snapshot of my server, which I plan to update monthly.) I don’t want to maintain multiple servers or keep an online snapshot that auto-updates, so if something breaks I can just restore from the snapshot and load the latest backup.

Let me know if this is at all possible. Thanks

The only thing different between what you want to do and what I did in the poor man’s system is that I use a live backup (and a storage site), while you just want a storage site. In that case, any storage location is fine. Your problem only arises in the ability to automatically sort the backup files to keep only the most recent. S3, like Dropbox, is only a storage location and not a full Linux environment. This means that you cannot run the sorting scripts and the cron jobs from within S3.

It is just a storage location, so you would have to go in and sort the files yourself. It may have some rudimentary ability to delete all but the last X number of files.

The scripts on the host would remain basically the same. The only thing you would do is replace the “scp” command with whatever command you use to transfer the file to your S3 storage.

Again, you would have to work out how to keep them sorted on the S3 side.
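As a sketch, the transfer step might look like this with the AWS CLI in place of scp (the bucket name, site name, and backup file name pattern here are assumptions, not from my actual scripts):

```shell
#!/usr/bin/env bash
# Ship the newest database dump to S3 instead of scp-ing it to a host.
# Bucket name, site name, and paths are hypothetical placeholders.
SITE="site1.local"
BACKUP_DIR="/home/frappe/frappe-bench/sites/${SITE}/private/backups"
BUCKET="s3://my-erpnext-backups"

# Compute the destination key for a local backup file
s3_key_for() {
  local site="$1" file="$2"
  echo "${BUCKET}/${site}/$(basename "$file")"
}

# Pick the newest database dump, if any, and upload it
latest=$(ls -1t "${BACKUP_DIR}"/*-database.sql.gz 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  aws s3 cp "$latest" "$(s3_key_for "$SITE" "$latest")"
fi
```

S3 also has lifecycle rules that can expire objects older than a set number of days, which may cover the keep-only-recent part without any sorting script at all.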

Alternatively, you could also get yourself a very cheap OpenVZ server that has only 1 GB of memory and maybe 30 GB of disk. It would be able to run the sorting scripts and keep very good track of your backup files (even if there is no ERPNext system there to use them). It would be quite happy storing and sorting once per hour. I have a similar server that does nothing but receive the files and sort them in case I need one someday. I think it cost me about $24 per year for that particular server. I have only ever had to recall a file from it one time in the past year, but that one file saved me 6 days of rework, so that one event was worth the $24 I paid for the year.

I use this storage server to hold the backups from 4 other ERPNext sites. I give each one a directory in the default user’s home directory, and the sorting scripts take care of the rest.
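The sorting on a storage box like that boils down to keeping the newest N files in each site’s directory. A minimal sketch (the directory layout is an assumption, not a requirement):

```shell
#!/usr/bin/env bash
# Keep only the newest N files in a backup directory; delete the rest.

prune_backups() {
  local dir="$1" keep="$2"
  # List files newest-first, skip the first $keep, remove what remains
  ls -1t "$dir" 2>/dev/null | tail -n +$((keep + 1)) | while read -r f; do
    rm -f -- "$dir/$f"
  done
}

# Example: keep one day of hourly backups in each site's directory
# for site_dir in /home/backup/*/; do prune_backups "$site_dir" 24; done
```

Run from cron once per hour on the storage server, after the incoming transfers have landed.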

As always, Your mileage may vary… :sunglasses:



I just tried running the commands on a fresh server on Alibaba Cloud and am getting these errors:

~$ sudo -H apt-get install python3-minimal
Reading package lists… Done
Building dependency tree
Reading state information… Done
python3-minimal is already the newest version (3.6.7-1~18.04).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

~$ sudo -H apt-get install python3-setuptools
Reading package lists… Done
Building dependency tree
Reading state information… Done
Package python3-setuptools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package ‘python3-setuptools’ has no installation candidate

Could you help me find the reason here? Probably the solution as well? Thanks in advance.

@bkm I have been doing a lot of research, and based on my manual tests using the commands in the script with some modifications, both backup and sorting will work with S3. Basically, S3 can serve as a backup endpoint or mid-point between the production server and the backup server. I will share the full process so you can add it to the poor man’s backup tutorial.

I am facing one major snag in automating the backup and sorting process, though. In order to execute the copy to S3, I need elevated privileges, which means I need to add sudo to the copy command in the script. This is proving a major challenge, as it requires a password to execute successfully. I edited the sudoers file to not require a password, but for some reason I am still hitting a logjam, as this error pops up:

sudo: no tty present and no askpass program specified

If I can find a way to run the command within the script with elevated privileges, my work will be as good as done, and I may be able to figure out how to replicate the same for other file storage services. And who knows, it may just become another backup option merged into the core if we can find a way to fine-tune it and make it into an app. I could therefore use your help here to overcome this snag.
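For reference, the sort of sudoers entry I am experimenting with looks roughly like this (the user name and command path are hypothetical):

```text
# /etc/sudoers.d/s3-backup  (edit with: sudo visudo -f /etc/sudoers.d/s3-backup)
# sudo matches on the full command path, so this must be exactly
# what the script invokes.
frappe ALL=(root) NOPASSWD: /usr/local/bin/aws
```

The “no tty present” error means sudo still wanted a password and had no terminal to ask on, i.e. the NOPASSWD rule is not matching; in sudoers the last matching rule wins, so a later conflicting rule can override it. A simpler route may be to configure the AWS CLI credentials for the backup user itself so the copy needs no sudo at all.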

Do it without -H, and with python instead of python3.

just do

sudo apt-get install python-minimal

sudo apt-get install build-essential python-setuptools

sudo python install.py --production --version 11 --user (frappe-user)

For the rest, follow the steps mentioned by bkm.


I actually already tried it without -H and got the same error. I then reinitialized the disk and tried the method again, and this time I hit the main error again and again.

If you are running Ubuntu 18.04, then the only reason there would be no install candidate is that the pointers to the sources are missing or corrupted.

The way to fix that is to run:

sudo apt-get update
sudo apt-get upgrade

This refreshes the package lists from all configured sources and makes the latest packages available to the install commands.

Hope this helps.


You’re a hero! I’ve been battling through this for a couple days now and ran across your post. Worked like a charm! Thanks for the assist

Hi, I think you should change your mirror list; don’t use the Alibaba Cloud mirror list. Try to find another mirror that is close to you.
sudo nano /etc/apt/sources.list
then change your mirror list, save and exit
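For example, the stock Ubuntu archive entries for 18.04 (bionic) look like this:

```text
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
```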
sudo apt-get update
then install python3-minimal and python3-setuptools

it worked for me


I changed the sources and am getting this error while running sudo apt-get upgrade:

~$ sudo apt-get upgrade
Reading package lists… Done
Building dependency tree
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
  libopts25 linux-headers-4.15.0-55 linux-headers-4.15.0-55-generic linux-image-4.15.0-55-generic
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Then I tried to run

~$ sudo -H apt-get install redis-server
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following packages were automatically installed and are no longer required:
  libopts25 linux-headers-4.15.0-55 linux-headers-4.15.0-55-generic linux-image-4.15.0-55-generic
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libjemalloc1 redis-tools
Suggested packages:
  ruby-redis
The following NEW packages will be installed:
  libjemalloc1 redis-server redis-tools
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 634 kB of archives.
After this operation, 3,012 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 Index of /ubuntu/ bionic/universe amd64 libjemalloc1 amd64 3.6.0-11 [82.4 kB]
Get:2 Index of /ubuntu/ bionic-updates/universe amd64 redis-tools amd64 5:4.0.9-1ubuntu0.2 [516 kB]
Get:3 Index of /ubuntu/ bionic-updates/universe amd64 redis-server amd64 5:4.0.9-1ubuntu0.2 [35.4 kB]
Fetched 634 kB in 0s (14.2 MB/s)
Selecting previously unselected package libjemalloc1.
(Reading database … 145034 files and directories currently installed.)
Preparing to unpack …/libjemalloc1_3.6.0-11_amd64.deb …
Unpacking libjemalloc1 (3.6.0-11) …
Selecting previously unselected package redis-tools.
Preparing to unpack …/redis-tools_5%3a4.0.9-1ubuntu0.2_amd64.deb …
Unpacking redis-tools (5:4.0.9-1ubuntu0.2) …
Selecting previously unselected package redis-server.
Preparing to unpack …/redis-server_5%3a4.0.9-1ubuntu0.2_amd64.deb …
Unpacking redis-server (5:4.0.9-1ubuntu0.2) …
Setting up libjemalloc1 (3.6.0-11) …
Setting up redis-tools (5:4.0.9-1ubuntu0.2) …
Setting up redis-server (5:4.0.9-1ubuntu0.2) …
Job for redis-server.service failed because a timeout was exceeded.
See "systemctl status redis-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript redis-server, action "start" failed.
● redis-server.service - Advanced key-value store
  Loaded: loaded (/lib/systemd/system/redis-server.service; disabled; vendor preset: enabled)
  Active: activating (auto-restart) (Result: timeout) since Mon 2019-12-23 13:27:16 CST; 24ms ago
  Docs: man:redis-server(1)
  Process: 995 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)

Dec 23 13:27:16 alibaba-v12 systemd[1]: Failed to start Advanced key-value store.
dpkg: error processing package redis-server (--configure):
  installed redis-server package post-installation script subprocess returned error exit status 1
Processing triggers for libc-bin (2.27-3ubuntu1) …
Processing triggers for systemd (237-3ubuntu10.33) …
Processing triggers for man-db (2.8.3-2ubuntu0.1) …
Processing triggers for ureadahead (0.100.0-21) …
Errors were encountered while processing:
  redis-server
E: Sub-process /usr/bin/dpkg returned an error code (1)


I think you are still using the Alibaba Cloud mirror, because it is still reading from (Index of /ubuntu/).
But that is okay; I think you’ve installed redis, it just can’t start.
Try editing redis.conf:
sudo nano /etc/redis/redis.conf
Find the bind ::1 line and change it, then save and exit.
sudo systemctl restart redis-server
sudo systemctl status redis-server



It’s actually Amazon’s EC2 mirror; I also tried my local mirrors.

Anyway, changing the conf file seems to have started the server. Going ahead with the rest of the code! Thanks :slight_smile: