If you use 'scp' to move backup files **Please Read This**

I have been using the linux command ‘scp’ for a while now to move copies of my scheduled backup files from my live production server to several other servers across the internet for safe keeping.

However, I recently ran into a hiccup with how I have been implementing this command. In almost all cases up to this point I had been using Ubuntu 16.04 LTS Linux, and mostly on the same cloud services vendor's networks. This led me to make an assumption about using this command within a cron-triggered script that is not always accurate.

Up to this point, I only had to SSH into my source host (where I copied the files from) and issue the scp command once manually against the target host server. The unique keys would be stored automatically for me, and from that point forward the command would work flawlessly from the script without my having to enter the target host's password again.

Anyway… I recently discovered that this “one-time manual” entry of the command does not always store away the appropriate keys to allow it to work from a script later. This became apparent when I tried this procedure on other Linux flavors and versions. So, in case anyone else here is having trouble getting the ‘scp’ command to work from a script, I wanted to pass along the proper way to make it work every time.
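For context, the setup in question is roughly a cron entry driving a tiny push script. A minimal sketch; the paths, user, and hostname below are made-up examples, not my real layout:

```shell
#!/bin/bash
# /usr/local/bin/push_backups.sh -- run nightly from cron, e.g.:
#   30 2 * * *  /usr/local/bin/push_backups.sh
#
# -B puts scp in batch mode: it fails immediately instead of hanging
# on a password prompt if key authentication is not actually working,
# which is exactly what you want in an unattended job.
scp -B /var/backups/*.tar.gz backup_user@host_dest:/var/backups/
```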

I found this on an Oracle Blog by an author named “Jayakara Kini” and even this author credits an unknown “Guest Author” for the procedure. Here is a LINK to the blog. (NOTE: When I rechecked this blog link on 07-29-2022 I found author “Jayakara Kini” no longer had this article posted. I leave the link here for posterity or until he reconnects the link to the article - BKM)

I know that old information like this tends to disappear from blog sites so for the sake of preservation I will post the contents here as well.

Hopefully there are others that will benefit from my endless searching for answers. Please do your own research as well and use only what you need to get everything working properly. :sunglasses:


How To scp, ssh and rsync without prompting for password

Whenever you need to use scp to copy files, it asks for passwords.
Same with rsync as it (by default) uses ssh as well.

Usually scp and rsync commands are used to transfer or backup files between known hosts or by the same user on both the hosts. It can get really annoying if the password is asked every time. I even had the idea of writing an expect script to provide the password.

Of course, I didn’t. Instead I browsed for a solution and found it after quite some time. There are already a couple of links out there which talk about it. I am adding to it…

Let’s say you want to copy between two hosts, host_src and host_dest. host_src is the host where you would run the scp, ssh or rsync command, irrespective of the direction of the file copy!

1. On host_src, run this command as the user that runs scp/ssh/rsync:

    $ ssh-keygen -t rsa

This will prompt for a passphrase. Just press the enter key. It’ll then generate an identification (private key) and a public key. Do not ever share the private key with anyone!

The output of ssh-keygen shows where it saved the public key. By default this is ~/.ssh/id_rsa.pub:

`Your public key has been saved in <your_home_dir>/.ssh/id_rsa.pub`

2. Transfer the id_rsa.pub file to host_dest by either ftp, scp, rsync or any other method.

3. On host_dest, log in as the remote user which you plan to use when you run scp, ssh or rsync on host_src.

4. Copy the contents of id_rsa.pub to ~/.ssh/authorized_keys with the following command:

$ cat id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

If this file does not exist, the above command will create it. Make sure you remove permission for others to read this file.
If it’s a public key, why prevent others from reading this file?
Probably, the owner of the key has distributed it to a few trusted users and has not placed any additional security measures to check if it’s really a trusted user.
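To make the permission requirements concrete, here is a minimal sketch of the tightening step on host_dest. sshd’s default StrictModes check will ignore an authorized_keys file (or a ~/.ssh directory) that is writable by group or others, so an unexpected password prompt is often just a permissions problem:

```shell
# Create the pieces if they do not exist yet, then lock them down.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys

chmod 700 ~/.ssh                   # only the owner may enter the directory
chmod 600 ~/.ssh/authorized_keys   # only the owner may read/write the key list
```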

5. Note that ssh by default does not allow root to log in. This has to be explicitly enabled on host_dest.

This can be done by editing /etc/ssh/sshd_config and changing the PermitRootLogin setting from no to yes.

Don’t forget to restart sshd so that it reads the modified config file. Do this only if you want to use the root login.

Well, that’s it. Now you can run scp, ssh and rsync on host_src connecting to host_dest and it won’t prompt for the password.

Note that this will still prompt for the password if you run the commands on host_dest connecting to host_src.

You can reverse the steps above (generate the public key on host_dest
and copy it to host_src) and you have a two way setup ready!
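For what it’s worth, modern OpenSSH ships a helper, ssh-copy-id, that performs steps 2–4 above in one command. A hedged sketch; remote_user and host_dest are the placeholders from the steps above, and the scratch directory is only for illustration:

```shell
# Generate a passphrase-less key pair into a scratch directory
# (-N "" sets an empty passphrase, suitable for unattended cron jobs):
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -f "$tmpdir/id_rsa" -N ""

# On a real system you would then push the public half with:
#   ssh-copy-id -i "$tmpdir/id_rsa.pub" remote_user@host_dest
# which appends it to ~/.ssh/authorized_keys on host_dest and fixes
# the file permissions for you.
ls "$tmpdir"
```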


Have you looked into SSH certificates?

I have yet to do it, but I understand it like this:


  • have a certificate like they do with PGP
  • specify a validating CA


  • have a certificate like they do with SSL
  • specify a validating CA
  • get a list of permitted Users

SSH establishes a connection if each side accepts the other’s certificate.


  • no more passing around of public keys
  • unlike plain keys, certs can expire, or be revoked if compromised

Nice article: If You're Not Using SSH Certificates You're Doing SSH Wrong | Smallstep Blog

I like the certificate idea, but I am not a fan of those certificates expiring.
I use the public keys because the scp tool is used as part of my Poor Man’s backup tutorial.

Using the keys, I set them once and never have to revisit the process.

Can I do the same with the certificates?


You create a certificate from a user’s key pair with a command like …

ssh-keygen -s ca_user_key -I jsmith -n john -V +52w .ssh/id_rsa.pub

The switch -V +52w sets a 52-week expiry.

From [ man ssh-keygen ]:

 -V validity_interval
Specify a validity interval when signing a certificate.  A validity interval 
may consist of a single time, indicating that the certificate is valid
beginning now and expiring at that time, or may consist of two times
separated by a colon to indicate an explicit time interval.

The start time may be specified as the string “always” to indicate the
certificate has no specified start time, a date in YYYYMMDD format, a
time in YYYYMMDDHHMM[SS] format, a relative time (to the current
time) consisting of a minus sign followed by an interval in the format
described in the TIME FORMATS section of sshd_config(5).

The end time may be specified as a YYYYMMDD date, a YYYYMMDDHHMM[SS] time,
a relative time starting with a plus character or the string “forever” to indicate that the certificate has no expiry date.

For example: “+52w1d” (valid from now to 52 weeks and one day from now),
“-4w:+4w” (valid from four weeks ago to four weeks from now),
“20100101123000:20110101123000” (valid from 12:30 PM, January 1st,
2010 to 12:30 PM, January 1st, 2011), “-1d:20110101” (valid from yesterday
to midnight, January 1st, 2011).  “-1m:forever” (valid from one minute ago
and never expiring).
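Putting the pieces together, the whole signing flow can be sketched locally with nothing but ssh-keygen. All the names here (ca_user_key, jsmith, john) are the example names carried over from the command above; note also that the server must have TrustedUserCAKeys pointing at the CA’s public key in sshd_config before it will accept such certificates:

```shell
work=$(mktemp -d)

# 1. Create the CA key pair (kept offline in real deployments):
ssh-keygen -q -t rsa -f "$work/ca_user_key" -N ""

# 2. Create an ordinary user key pair:
ssh-keygen -q -t rsa -f "$work/id_rsa" -N ""

# 3. Sign the user's public key: key identity jsmith, principal john,
#    valid for 52 weeks from now:
ssh-keygen -s "$work/ca_user_key" -I jsmith -n john -V +52w "$work/id_rsa.pub"

# The certificate is written alongside the key as id_rsa-cert.pub;
# ssh-keygen -L prints its contents (identity, principals, validity):
ssh-keygen -L -f "$work/id_rsa-cert.pub"
```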

And then there is this:


(I believe it is true of all distributions, not just RH.)

Hmm… If that is the case, then I wonder what will replace that functionality for moving files around the internet?

BKM :thinking:

rsync, sftp

Once you get comfortable with rsync it really is superior.

(but I do love scp)

Well, I have never tried to use rsync from a bash script. The ‘scp’ tool works well from within a script and that makes the whole Poor Man’s backup concept work like a Swiss clock. It always just works.

I get it that scp does not have all the built-in security features that you might hope to find, but when used with the key pairs, it was a fairly elegant solution to a common problem.

Since I am right now rebuilding a group of 14 servers all sharing the same backup files, I guess it might be wise to play around with rsync to see if I can get it to do what I want.

Thanks for the tip-off to the deprecation notice.


You’re welcome. It’s a pleasure to help prolific contributors like you.


The ‘sftp’ tool did not work out well. Even with a key pair in place it still wants the password entered manually, and that defeats the purpose of doing the transfer via bash script.

Will work on ‘rsync’ tool next.


Here’s an rsync script I find super handy, that I created a few years ago.

source ./envars.sh;

export TARGET_DIR="${REPO_DIR}";

if [[ -z ${1} ]]; then
  echo -e "Usage: ./rSync y";
  echo -e "Will synchronize this directory with :: ${SERVER}:${TARGET_DIR}";
  exit 0;
fi
echo -e "Synching this directory with :: ${SERVER}:${TARGET_DIR}";

declare PKG="inotify-tools";
dpkg-query -l ${PKG} &>/dev/null || sudo -A apt -y install ${PKG};
declare PKG="tree";
dpkg-query -l ${PKG} &>/dev/null || sudo -A apt -y install ${PKG};

ssh -t ${SERVER} "mkdir -p ${TARGET_DIR}";
echo -e "Target prepared. Synching has begun.";
while inotifywait -qqr -e close_write,move,create,delete ./*; do
  rsync -avx . ${SERVER}:${TARGET_DIR};
done

Any appropriate changes to any file in its directory will be immediately replicated to the target.

You’ll need to configure all those variables in ./envars.sh of course, but I think you’ll find it pretty self-explanatory.

If it helps, in my ~/.ssh/config file I have aliases for each of my remote installations:

# -----------------------------------------------------------------------
# Alias configuration: 'lseh' «begins»
# Alias 'lseh' binds to remote user 'erpadm@loso.erpnext.host'
Host lseh
  User erpadm
  HostName loso.erpnext.host
  ServerAliveInterval 120
  ServerAliveCountMax 20
  IdentityFile /home/you/.ssh/erpdev_erpnext_host
# Alias configuration: 'lseh' «ends»

So for the ${SERVER} variable above the ./envars.sh file has:

export SERVER=lseh

Ahh… very slick!

I think I will do something like this but without the aliases, because sometimes I have to send someone much less experienced to work on problems. Aliases would confound most of the 20-year-old Linux newbies I have working around the area. :roll_eyes:

As I am reading through the man page for rsync, it seems like I can do nearly the same thing I was doing with ‘scp’ by using the pub keys again.

Setting up some tests now.


Another thing:

${REPO_DIR} in my previous post, does in fact refer to a git repository. I edit locally, and every change gets pushed over, even the git repo control files.

So, in addition, there’s a file in my repo root I call Run On Save.


#!/usr/bin/env bash

# Watched directory and command to run, taken from the CLI arguments:
declare WATCH_DIRECTORY="${1}";
declare EVENT_TASK="${2}";

declare PKG="inotify-tools";
dpkg-query -l ${PKG} &>/dev/null || sudo apt -y install ${PKG};
declare PKG="tree";
dpkg-query -l ${PKG} &>/dev/null || sudo apt -y install ${PKG};

echo "Will execute : '${EVENT_TASK}'";

function listVariables() {
  echo -e "Variables ::\n  WATCH_DIRECTORY : ${WATCH_DIRECTORY}\n  EVENT_TASK : ${EVENT_TASK}";
}

function doIt() {
  sleep 1;
  ${EVENT_TASK};
}

# listVariables;

while true; do  # run indefinitely
  inotifywait -qqr -e close_write,move,create,delete ${IGNORE_PATHS} ${WATCH_DIRECTORY} && doIt;
done

You use it like this:

./ros.sh ./src "./test_my.sh -a stuff"

So when I save a file from my IDE:

  1. My rsync script pushes only the changes to my remote site.
  2. On the remote site, my ros.sh script sees the change and runs a parametrised script, for example ./test_my.sh -a stuff (quotes required).

Ahh, yes. I do something similar with the ‘incron’ utility. It looks a lot like cron, but it “reacts” to inbound file transfers and you can control how and when the destination host triggers on the transaction.


Hah! So I reinvented the wheel. Terrific.

Yeah… it seems that you have a whole bunch of stuff going on around and in support of the “inotifywait” function in order to get your stuff done.

With “incron” you just add the tasks to the incrontab list and they handle themselves. The syntax is even similar.
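For anyone who has not met it, an incrontab entry is one line of path, event mask, and command; the watched path and script name below are made-up examples:

```
# Edit with `incrontab -e`; fields are <path> <mask> <command>.
# $@ expands to the watched directory, $# to the file that
# triggered the event:
/var/backups    IN_CLOSE_WRITE    /usr/local/bin/on_new_backup.sh $@/$#
```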


On a happier note, it appears your pointer to ‘rsync’ is indeed my solution to the deprecation of ‘scp’.

Once I have my servers set up, I will go back and change the Poor Man’s tutorial to use rsync instead of scp.

BKM :grin: