
I'm trying to copy a database from one server to another. I tried using the following command and it's getting tied up on entering the ssh passwords. If I put in one server's password, it complains about the other and vice versa.

ssh root@server1 mysqldump --databases db | ssh root@server2 mysql

I managed to do the transfer by holding the dumps temporarily on my computer, just wondering if there is a way to get this to work.

Drew
  • The way you intend to do it would bring the data local first; it would be more efficient to do the pipe on the first remote machine. You also need to set up your ssh agent forwarding and keys, and you really should do this part regardless. – J. M. Becker Feb 28 '12 at 20:35
  • I'm not terribly competent in the Linux administration arena. The two servers currently can't talk to one another, which is why I'm using my local box as an intermediary. I (now) know I can set up my keys so there's no need for a passphrase, but I'm just curious whether there's a way around it that doesn't involve changing the server's configuration. – Drew Feb 28 '12 at 20:41

6 Answers


Password prompts are cumbersome. The way to make ssh more usable is to use keys for authentication, and run a key agent (ssh-agent) where you can register a key once per session (with an optional timeout). Then you can directly run

ssh root@server1 mysqldump --databases db | ssh root@server2 mysql
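A minimal key and agent setup might look like the following sketch; id_demo is a made-up key name, and the ssh-copy-id steps are shown commented out because they prompt for each server's password one last time:

```shell
# Generate a key pair. The empty passphrase is only for illustration;
# in practice, set one and let the agent remember it per session.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N '' -C demo-key -f ~/.ssh/id_demo

# Start an agent for this session and register the key with it.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_demo

# Install the public key on each server (enter each password once):
# ssh-copy-id -i ~/.ssh/id_demo.pub root@server1
# ssh-copy-id -i ~/.ssh/id_demo.pub root@server2
```

After this, subsequent ssh connections to those servers authenticate through the agent with no prompt.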

If you can log in from server1 to server2, you should transfer the data directly between the two servers.

ssh root@server1 'mysqldump --databases db | ssh root@server2 mysql'

(or the opposite if you can log in from server2 to server1). Again, use an SSH key for authentication. On your local machine, register an SSH private key that gives you access to server1 and one that gives you access to server2, and make sure that agent forwarding is enabled (ForwardAgent yes in ~/.ssh/config).

If you really can't avoid entering a password, your best recourse is to first establish the connection, then go and transfer the data. With recent enough versions of OpenSSH, you can open a master connection, then subsequently route slave connections through it. The slave connections require no extra authentication. In your ~/.ssh/config:

ControlMaster auto
ControlPath ~/.ssh/control:%h:%p:%r

Start a master connection to both servers:

ssh -N -M root@server1 &
ssh -N -M root@server2 &

Then do the copy:

ssh root@server1 mysqldump --databases db | ssh root@server2 mysql

After this you can kill the master connections if you don't need them anymore.

ssh -O exit root@server1
ssh -O exit root@server2
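On OpenSSH 5.6 or later you can also skip the explicit -M step: with ControlPersist, the first ordinary connection becomes the master on demand and lingers in the background after you log out. A sketch for ~/.ssh/config:

```
Host server1 server2
  ControlMaster auto
  ControlPath ~/.ssh/control:%h:%p:%r
  ControlPersist 10m
```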

The most efficient way to do this would be to pipe directly from the mysqldump server, with a command like this:

ssh root@server1 'mysqldump --databases db | ssh root@server2 mysql'

If you can't pipe from the remote machine for some configuration-related reason, you could run this instead:

ssh root@server1 'mysqldump db' | ssh root@server2 'mysql db'

If I were forced to do the latter, I would consider gzipping the mysqldump. I believe this saved me some transfer time; regardless, it should not be the first choice.

ssh root@server1 'mysqldump db | gzip -f' | ssh root@server2 'gzip -d | mysql db'
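You can sanity-check the compression pipeline locally by substituting echo for mysqldump and leaving out the ssh hops; the data should come out unchanged:

```shell
# Compress on the "sending" side, decompress on the "receiving" side.
echo "CREATE TABLE t (id INT);" | gzip -f | gzip -d
# prints: CREATE TABLE t (id INT);
```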

This last example might not be 100% correct; I'm just assuming it should work.

J. M. Becker
  • Sorry I didn't include this in the original question but the two servers can't see each other on the network. That's why I'm using my own machine as an intermediary to handle it.

    This is more of a curiosity than any real need.

    – Drew Feb 28 '12 at 20:46
  • The second part is giving me the same problem that my original command did. It asks for both passwords. – Drew Feb 28 '12 at 20:48
  • You will be asked for passwords every time until you set up ssh agent forwarding and SSH keys. Edit the server's sshd_config and make sure the option "AllowAgentForwarding yes" is set. You also need to enable forwarding from the ssh client; you can do that with ssh -A or in your ssh_config. – J. M. Becker Feb 28 '12 at 20:54

In cases where you are forced to pass from A to B through a proxy-like machine, there's no magic bullet: you'll have to use the proxy, be it your computer or an intermediate server.

It can be quite annoying, but ssh is so flexible that it can be made transparent. If you install the nc program on the proxy, you can add this to the ~/.ssh/config of A:

Host B
  ProxyCommand ssh -q my_proxy nc -q0 B 22

And after that, you will be able, from A, to run classical ssh/scp commands to B as if there were no proxy at all.
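If the proxy runs OpenSSH 7.3 or later, the same effect is available without nc through the ProxyJump option (-J on the command line); a sketch, using the same host names:

```
Host B
  ProxyJump my_proxy
```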

So, with this config, you will be able to use sshfs.

sshfs B: /mnt/B_fs

And with sshfs, your dump command can finally look like this:

 mysqldump --databases db ... > /mnt/B_fs/db.dump

Sadly, it won't improve the bandwidth through the proxy. The best you can do with ssh is to enable Compression and raise CompressionLevel.

See this post for a more detailed explanation about ssh multi-hop.

Coren

As I understand it, the main difficulty you're encountering is that you need to enter two passwords: one for the first ssh connection (writing to the pipe) and one for the second ssh connection (reading from the pipe).

One reasonable way to deal with this (somewhat unreasonable) problem is to set up a temporary password-less key to authenticate from your local account to the remote root accounts. As long as the remote ssh daemon configuration already supports key-based authentication, you can set up password-less login without changing the sshd configuration. With this method, however, you do need to edit the authorized_keys list for each remote user.

Here's a script that generates a password-less key, does an interactive login on each remote in order to add key authorization, non-interactively does your mysql copy, then non-interactively removes the authorization, and finally removes the generated key files.

#!/bin/bash
set -e  # bail out if anything fails

verbose=
rmthosts="server1 server2"
rmtuser=root

# generate key with empty passphrase
keyfn=~/.ssh/tmpid_rsa
keycomment="tmpid_$USER@$HOSTNAME"
rm -f $keyfn*
ssh-keygen -t rsa -N '' -C $keycomment -f $keyfn

# add key to remote accounts
authfn=.ssh/authorized_keys
for rmthost in $rmthosts
do
    [ "$verbose" ] && echo "Adding key to $rmtuser@$rmthost:$authfn"
    < $keyfn.pub ssh $rmtuser@$rmthost "cat >>$authfn"
done


# perform your mysql copy
[ "$verbose" ] && echo "Performing operation"
ssh -i $keyfn $rmtuser@server1 'mysqldump --databases db' | \
    ssh -i $keyfn $rmtuser@server2 mysql


# remove the temporary key
[ "$verbose" ] && echo "Removing temporary keys"
for rmthost in $rmthosts
do
    ssh -i $keyfn $rmtuser@$rmthost "sed -i '/$keycomment/d' $authfn"
done
rm -f $keyfn*
R Perrin

The key-based solutions described above are the best way to go. But if you, for any reason, cannot do that, you could modify your command line to

ssh root@server1 mysqldump --databases db | { sleep 5; ssh root@server2 mysql; }

in order to have the 2nd command wait. Adjust the 5 according to how fast you type in the 1st password.
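The timing trick is easy to try locally, with echo standing in for the dump and cat for the restore; the data simply sits in the pipe buffer until the delayed reader picks it up:

```shell
# The left side writes immediately; the brace group delays the reader.
echo "dump data" | { sleep 1; cat; }
# prints: dump data (after about a second)
```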

Be aware that this is just a fallback solution and that the key solutions are by far better.

glglgl

While the key management solutions are great and the way to go, I have another way to do it without them: use named pipes and two shells.

In the first shell type

mkfifo ~/db_dump
cat ~/db_dump | ssh root@server2 mysql

and enter the second password.

In the other shell type

ssh root@server1 mysqldump --databases db >~/db_dump

and enter the first password.

When the dump is done:

rm ~/db_dump

in either shell.

I think you could also start the dump first and the restore second; it should not make much difference.
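You can rehearse the whole sequence locally without ssh; here cat stands in for both remote commands, and /tmp/demo_fifo and /tmp/demo_out are made-up paths:

```shell
mkfifo /tmp/demo_fifo
# "Second shell": the reader blocks until a writer opens the pipe.
cat /tmp/demo_fifo > /tmp/demo_out &
# "First shell": writing unblocks the reader.
echo "dump" > /tmp/demo_fifo
wait
cat /tmp/demo_out
# prints: dump
rm -f /tmp/demo_fifo /tmp/demo_out
```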

ospalh