Error on ERPNext - Ubuntu 18.04 LTS Installation

W.r.t Digital Ocean …

Check this out :

VPS service providers shutting down notice - #7 by MartinHBramwell

I’m so pleased with that supplier that I’m going to keep boosting them 'til someone tells me to shut up. Disclaimer: my only connection with them is as a customer of many years.

In the USA, I am settled on a provider on the east coast, and on the west coast I am still experimenting. My west coast servers are not as robust as the east coast provider’s, so I may still move to another west coast provider later.

I am still looking for reliable servers in Chicago and Texas, but nothing worth mentioning yet.



I dumped GCP over the same concerns that @flexy2ky was having with DO. I went to DO for a short while but could not get consistent customer service so I left. I think sometimes their decisions are a bit “random” and not really focused on providing the best service, but that was only my experience.


@bkm I must say a lot of research and work went into the Poor Man’s Backup System you built. It’s quite extensive and surely a better option than the Dropbox backup system. However, is it possible to adapt this to, say, a file storage service like S3 or something else?

Restoring to a backup server is by far the most sane and safe thing to do, but in a case where I don’t want to set up backup servers yet still need hourly backups to restore where necessary, would it be possible to adapt this? (Disclaimer: I have migrated to AWS and have created a snapshot of my server which I plan to update monthly.) I don’t want to maintain multiple servers; I’d rather keep an online snapshot which auto-updates, so in case something breaks I can just restore from a snapshot and load the latest backup.

Let me know if this is at all possible. Thanks.

The only thing different between what you want to do and what I did in the poor man’s system is that I use a live backup (and a storage site) while you just want a storage site. In that case, any storage location is fine. Your problem only arises in the ability to automatically sort the backup files to keep only the most recent ones. S3, like Dropbox, is only a storage location and not a full Linux environment. This means that you cannot run the sorting scripts and the cron jobs from within S3.

It is only a storage location, so you would have to go into it to sort the files yourself. They may have some rudimentary ability to delete all but the last X number of files.

The scripts on the host would remain basically the same. The only thing you would do is replace the “scp” command with whatever command you use to transfer the file to your S3 storage.
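As a rough sketch of that swap (hedged: the bucket name, site name, and paths below are placeholders, not bkm’s actual script, and it assumes the awscli package is installed and configured on the host):

```shell
#!/usr/bin/env bash
# Sketch: push the newest bench backup to S3 instead of scp-ing it to
# a backup server. SITE, BUCKET, and BACKUP_DIR are placeholders.
SITE="site1.local"
BUCKET="s3://my-erpnext-backups/${SITE}"
BACKUP_DIR="/home/frappe/frappe-bench/sites/${SITE}/private/backups"

upload_latest() {
  # find the newest .sql.gz backup in the bench backup directory
  latest=$(ls -1t "${BACKUP_DIR}"/*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] || return 1
  # where the poor man's script used scp, use the awscli instead
  aws s3 cp "$latest" "${BUCKET}/"
}
# e.g. call upload_latest from the same hourly cron job that runs the backup
```

Everything else in the host-side script (the cron schedule, the bench backup call) stays as it was; only the transfer line changes.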

Again, you would have to work out how to keep them sorted on the S3 side.

Alternatively, you could get yourself a very cheap OpenVZ server that has only 1 GB of memory and maybe 30 GB of disk. It would be able to run the sorting scripts and keep very good track of your backup files (even if there is no ERPNext system there to use them). It would be quite happy just storing and sorting once per hour. I have a similar server that does nothing but receive the files and sort them in case I need one someday. I think it cost me about $24 per year for that particular server. I have only ever had to recall a file from it one time in the past year, but that one file saved me 6 days of rework, so that one event was worth the $24 I paid for the year.

I use this storage server to hold the backups from 4 other ERPNext sites. I give each one a directory in the default user’s home directory and the sorting scripts take care of the rest.
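For what it’s worth, the sorting side of that can be sketched as a small script (hedged: the directory layout and the keep-count are my assumptions, not the actual poor man’s scripts):

```shell
#!/usr/bin/env bash
# Prune each site's backup directory down to the newest KEEP files.
# Intended to run hourly from cron on the cheap storage server.
KEEP=24   # e.g. keep one day of hourly backups

prune_dir() {
  # delete everything but the newest $KEEP files in the given directory
  # (assumes backup filenames without leading/trailing whitespace)
  dir="$1"
  ls -1t "$dir" 2>/dev/null | tail -n +"$((KEEP + 1))" | while read -r f; do
    rm -f "$dir/$f"
  done
}

# usage, with one subdirectory per ERPNext site under the home directory:
#   for d in "$HOME"/*/; do prune_dir "$d"; done
```

A cron line like `0 * * * * /home/backupuser/` would then keep every site’s directory tidy without any manual sorting.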

As always, Your mileage may vary… :sunglasses:



I just tried running the commands on a fresh server on Alibaba Cloud. Getting these errors:

~$ sudo -H apt-get install python3-minimal
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-minimal is already the newest version (3.6.7-1~18.04).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

~$ sudo -H apt-get install python3-setuptools
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python3-setuptools is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'python3-setuptools' has no installation candidate

Could you help me find the reason here? And probably the solution as well? Thanks in advance.

@bkm I have been doing a lot of research, and based on my manual tests using the commands in the script with some modifications, both backup and sorting will work on S3. Basically, S3 can serve as a backup endpoint or mid-point between the production server and a backup server. I will share the full process so you can add it to the poor man’s backup tutorial.

I am facing one major snag in automating the backup and sorting process, though. In order to execute the copy to S3, I need elevated privileges, which means I need to add sudo to the copy command in the script. This is proving a major challenge as it requires a password to execute successfully. I edited the sudoers file to not require a password, but for some reason I am still hitting a logjam as this error pops up:

sudo: no tty present and no askpass program specified

If I can find a way to run the command within the script but with elevated privileges, my work will be as good as done, and I may be able to figure out how to replicate the same for other file storage services. And who knows, it may just become another backup option merged into the core if we can find a way to fine-tune it and make it into an app. I could therefore use your help here to overcome this snag.
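A common cause of that exact error is a `requiretty` default, or the sudoers rule not covering the exact command the script invokes. A hedged sketch of a drop-in sudoers rule (the user name `frappe` and the command path are placeholders; only edit with `visudo -f` so a syntax error cannot lock you out of sudo):

```
# /etc/sudoers.d/s3-backup  -- create with: sudo visudo -f /etc/sudoers.d/s3-backup
# allow the backup user to run just the one transfer command without a password
frappe ALL=(ALL) NOPASSWD: /usr/local/bin/aws
# don't demand a tty for this user (the source of the
# "no tty present and no askpass program specified" message on some setups)
Defaults:frappe !requiretty
```

Scoping `NOPASSWD` to the single command the cron script needs is safer than a blanket `NOPASSWD: ALL`, and the command path in the rule must match the full path the script actually calls.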

Do it without -H and without python3.

just do

sudo apt-get install python-minimal

sudo apt-get install build-essential python-setuptools

sudo python --production --version 11 --user (frappe-user)

Rest follow the steps mentioned by bkm


Actually, I already tried without -H and got the same error. I then reinitialized the disk and tried the method again. This time I faced the main error again and again.

If you are running Ubuntu 18.x, then the only reason there would be no install candidate is if the pointers to the sources were missing or corrupted.

The way to fix that is to run:

sudo apt-get update
sudo apt-get upgrade

This will find all the most updated sources and make them available to the install commands.

Hope this helps.


You’re a hero! I’ve been battling through this for a couple of days now and ran across your post. Worked like a charm! Thanks for the assist.

Hi, I think you should change your mirror list; don’t use the Alibaba Cloud mirror list. Try to find another mirror close to you.
sudo nano /etc/apt/sources.list
then change your mirror list, save and exit
sudo apt-get update
then install python3-minimal and python3-setuptools

it worked for me
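For reference, pointing apt at the main Ubuntu archive instead of a cloud-local mirror can look like this in /etc/apt/sources.list (bionic is the 18.04 series; which mirror you pick is up to you):

```
deb bionic main restricted universe multiverse
deb bionic-updates main restricted universe multiverse
deb bionic-security main restricted universe multiverse
```

After saving, run `sudo apt-get update` so apt re-reads the package indexes from the new sources.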


I changed the sources and am getting this error while running sudo apt-get upgrade:

~$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
  libopts25 linux-headers-4.15.0-55 linux-headers-4.15.0-55-generic linux-image-4.15.0-55-generic
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Then I tried to run

~$ sudo -H apt-get install redis-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libopts25 linux-headers-4.15.0-55 linux-headers-4.15.0-55-generic linux-image-4.15.0-55-generic
  linux-modules-4.15.0-55-generic linux-modules-extra-4.15.0-55-generic sntp
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libjemalloc1 redis-tools
Suggested packages:
  ruby-redis
The following NEW packages will be installed:
  libjemalloc1 redis-server redis-tools
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 634 kB of archives.
After this operation, 3,012 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 Index of /ubuntu/ bionic/universe amd64 libjemalloc1 amd64 3.6.0-11 [82.4 kB]
Get:2 Index of /ubuntu/ bionic-updates/universe amd64 redis-tools amd64 5:4.0.9-1ubuntu0.2 [516 kB]
Get:3 Index of /ubuntu/ bionic-updates/universe amd64 redis-server amd64 5:4.0.9-1ubuntu0.2 [35.4 kB]
Fetched 634 kB in 0s (14.2 MB/s)
Selecting previously unselected package libjemalloc1.
(Reading database ... 145034 files and directories currently installed.)
Preparing to unpack .../libjemalloc1_3.6.0-11_amd64.deb ...
Unpacking libjemalloc1 (3.6.0-11) ...
Selecting previously unselected package redis-tools.
Preparing to unpack .../redis-tools_5%3a4.0.9-1ubuntu0.2_amd64.deb ...
Unpacking redis-tools (5:4.0.9-1ubuntu0.2) ...
Selecting previously unselected package redis-server.
Preparing to unpack .../redis-server_5%3a4.0.9-1ubuntu0.2_amd64.deb ...
Unpacking redis-server (5:4.0.9-1ubuntu0.2) ...
Setting up libjemalloc1 (3.6.0-11) ...
Setting up redis-tools (5:4.0.9-1ubuntu0.2) ...
Setting up redis-server (5:4.0.9-1ubuntu0.2) ...
Job for redis-server.service failed because a timeout was exceeded.
See "systemctl status redis-server.service" and "journalctl -xe" for details.
invoke-rc.d: initscript redis-server, action "start" failed.
● redis-server.service - Advanced key-value store
   Loaded: loaded (/lib/systemd/system/redis-server.service; disabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: timeout) since Mon 2019-12-23 13:27:16 CST; 24ms ago
     Docs: ,
           man:redis-server(1)
  Process: 995 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)

Dec 23 13:27:16 alibaba-v12 systemd[1]: Failed to start Advanced key-value store.
dpkg: error processing package redis-server (--configure):
  installed redis-server package post-installation script subprocess returned error exit status 1
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.33) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for ureadahead (0.100.0-21) ...
Errors were encountered while processing:
  redis-server
E: Sub-process /usr/bin/dpkg returned an error code (1)


I think you are still using the Alibaba Cloud mirror, because it still reads from (Index of /ubuntu/).
But that is okay; I think redis is installed, it just can’t start.
Try editing redis.conf:
sudo nano /etc/redis/redis.conf
find the bind ::1 line and change it to bind
save and exit
sudo systemctl restart redis-server
sudo systemctl status redis-server
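After the edit, a quick sanity check helps; and since the original apt run died inside redis-server’s post-install step, `sudo dpkg --configure -a` will let dpkg finish the half-configured package. A small check sketch (assumes the stock Ubuntu redis packaging, where redis-cli answers PONG when the daemon is up):

```shell
# report whether the local redis server answers;
# "redis-cli ping" prints the literal string PONG when reachable
check_redis() {
  if redis-cli ping 2>/dev/null | grep -q '^PONG'; then
    echo "redis OK"
  else
    echo "redis DOWN"
  fi
}
```

Run `check_redis` after `sudo systemctl restart redis-server`; if it still says DOWN, `journalctl -u redis-server` usually names the bad config line.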



It’s actually Amazon’s EC2 mirror; I also tried my local mirrors.

Anyway, changing the conf file seems to have started the server. Going ahead with the rest of the code! Thanks :slight_smile:

ok. thanks

Nah; I got these errors when I tried to install for the second time.

RUNNING HANDLER [mariadb : restart mysql] ***************************************************************************************
task path: /tmp/.bench/playbooks/roles/mariadb/handlers/main.yml:2

PLAY RECAP **********************************************************************************************************************
localhost                  : ok=23   changed=5    unreachable=0    failed=1    skipped=16   rescued=0    ignored=0

Traceback (most recent call last):
  File "", line 413, in <module>
  File "", line 135, in install_bench
    run_playbook('site.yml', sudo=True, extra_vars=extra_vars)
  File "", line 327, in run_playbook
    success = subprocess.check_call(args, cwd=os.path.join(cwd, 'playbooks'))
  File "/usr/lib/python3.6/", line 311, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ansible-playbook', '-c', 'local', 'site.yml', '-vvvv', '-e', '@/tmp/extra_vars.json', '--become', '--become-user=farid']' returned non-zero exit status 2.

I’ve been following your great instructions and keep getting an issue at this step.


I get an access denied error; however, if I use sudo in front I can run it OK, but I am worried I may be creating problems later. Do you have any ideas what may be causing this?
Is it important? Or perhaps it is working despite the warning?
I created a new user and gave it sudo privileges etc. as per your instructions.
Perhaps I am not in the correct place in the file structure?

This is the output:

my user name@erpnext1:/root$ wget
pathconf: Permission denied

Connecting to (||:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 14207 (14K) [text/plain] Permission denied

Cannot write to ‘’ (Success).


Yes, adding sudo would generate problems later when you need to update bench.

However, I see that you ran the wget command from the root user’s directory. This is your problem.

You are supposed to only use the root login long enough to create the linux user that will then be used to run everything else. As per the post above… (see BOLD text)

Maybe you missed the preamble to the commands listed in the post. If you log out of the root user and then log back in as the new user that you just gave sudo rights, then you should be able to get it to work.

Most important though…

  • Start with a fresh Ubuntu 18.x server
  • Do the apt-get update & the apt-get upgrade
  • Create the new user and assign them sudo rights
  • LOGOUT of the root user and LOGIN as the new user

Then follow the commands



Thanks bkm,
I need to learn the difference between a root user and the file location root.
I did log out and then back in as the new user.
The location it logged me in at was the root location.

When you load a new instance of ERPNext, what does the file structure look like for you?
I have root/home/username/frappe-bench on this latest environment.

From what file location do you suggest I run the wget
I presume at this stage the frappe-bench folder does not yet exist.

I suspect my earlier mistake resulted in permission issues, which I rectified with
sudo chown -R user:user /home/user

Now up and running V12 on a new server again.

Thanks for your help. Practice makes perfect.

Next job is to create another new V12 and open new site from backup of V11…