2

I love to do just this:

$ sshfs myServer: mountPoint
$ cp thisFile mountPoint/

I am now using a LiveCD that does not have the sshfs utility, and I need to run something like $ sudo dd if=/dev/sdb2 and send the output to a remote machine. How can I do this as easily as with sshfs?

Perhaps related

  1. https://superuser.com/questions/397646/cloning-fresh-windows-7-fsed-hdd-to-linux-server-because-having-no-external-hdd

Update in reply to psusi

$ sudo fdisk -l|tail
255 heads, 63 sectors/track, 4864 cylinders, total 78142806 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x181d6d22

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     3074047    12288000    7  HPFS/NTFS/exFAT
/dev/sdb2         3074048   600563711  2389958656    7  HPFS/NTFS/exFAT
/dev/sdb3       600563712   625139711    98304000    7  HPFS/NTFS/exFAT
$ sudo file -s /dev/sdb
/dev/sdb: x86 boot sector; partition 1: ID=0x7, active, starthead 32, startsector 2048, 3072000 sectors; partition 2: ID=0x7, starthead 89, startsector 3074048, 597489664 sectors; partition 3: ID=0x7, starthead 254, startsector 600563712, 24576000 sectors, code offset 0xe
$ sudo ntfsclone --save-image --output - /dev/sdb2
ntfsclone v2011.4.12AR.4 (libntfs-3g)
ERROR(22): Opening '/dev/sdb2' as NTFS failed: Invalid argument
Apparently device '/dev/sdb2' doesn't have a valid NTFS. Maybe you selected
the whole disk instead of a partition (e.g. /dev/hda, not /dev/hda1)?
Rui F Ribeiro
  • 1
    FYI, if this is a windows partition you are trying to clone, use ntfsclone instead of dd. It is smart enough to skip the trash in the unused/free sectors, and can compress the image. – psusi Mar 15 '12 at 19:04
  • @psusi: could you tell more in an answer? Yes, this is a Windows partition (actually the whole hard drive is from a fresh Windows machine). I tried ntfsclone --save-image --output - /dev/sdb3 | bzip2 | ssh m@m.com 'cat > 15032012_w7_3.img.bz2' but I am getting no content?! –  Mar 15 '12 at 19:31
  • Did you run ntfsclone as root ( sudo )? – psusi Mar 15 '12 at 19:44
  • @psusi: yes, look at the updated question. I get error 22 when trying to execute the command. –  Mar 16 '12 at 01:07
  • Looks like that partition either doesn't actually contain an ntfs filesystem or it is damaged. What does sudo blkid say the type is? – psusi Mar 16 '12 at 02:29
  • @psusi: it finds only /dev/sda1 and /dev/sda5, nothing about /dev/sdb*. $ sudo fdisk -l does list /dev/sdb* though, as shown above. –  Mar 16 '12 at 02:38
  • Try sudo blkid -p /dev/sdb – psusi Mar 16 '12 at 02:43
  • @psusi: nothing again. –  Mar 16 '12 at 02:44

3 Answers

4

To answer your question directly

dd if=/dev/sdb2 ibs=1M | ssh -C myServer 'dd of=/path/to/destination obs=1M'

As a bonus, you can do the following to see the progress (assuming you have the pv utility):

pv /dev/sdb2 | ssh -C myServer 'dd of=/path/to/destination obs=1M'
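As a quick local sanity check of the same pipeline shape, you can substitute a regular file for /dev/sdb2 and a plain dd for the ssh stage, and confirm the data survives the block-size conversion intact (all paths here are illustrative):

```shell
# Stand-in for /dev/sdb2: a 4 MiB file of random data
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null

# Same pipeline shape as above, minus the ssh hop
dd if=/tmp/src.img ibs=1M 2>/dev/null | dd of=/tmp/dst.img obs=1M 2>/dev/null

# The copy must be bit-identical to the source
cmp /tmp/src.img /tmp/dst.img && echo OK
```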
phemmer
  • 1
    Bah, jofel beat me to it. Still leaving though as the answer is slightly different. Also invoking pv in the method I provided lets pv give you progress of how much is left (including time estimates), instead of just how much has transferred so far. – phemmer Mar 15 '12 at 19:32
  • why ibs=1M and obs=1M? –  Mar 16 '12 at 01:26
  • 1
    Larger block sizes (over the default for dd) improve IO to/from physical volumes. While the value jofel gave (64k) is probably fine, memory is cheap and I usually just set it at 1M. – phemmer Mar 16 '12 at 08:29
  • @Patrick I've done some measurements. See my comment on my answer. I measure with obs=1M 1.8 GB/s, with obs=16k 2.8 GB/s from /dev/zero to /dev/null over a pipe. But in our case the queue is usually not the limiting factor, so the discussion is just theoretical... – jofel Mar 16 '12 at 09:31
  • 1
    @jofel piping to /dev/null is not a valid test. It's the size of the blocks to/from the physical volume that matters. – phemmer Mar 16 '12 at 09:32
  • @Patrick I wrote at the end of my comment that the discussion here is just theoretical. The maximum speed over a pipe with dd only matters if the possible throughput of the programs on both sides of the pipe is higher than the throughput of the pipe. That could be the case if you read/write to ramdisks or SSDs. You cannot transfer more than 64k at once over a pipe without (automatic) waiting for the other program. The optimal buffer size used to write/read from pipes is independent of the best block size for any other writing/reading of the programs connected to the pipe. – jofel Mar 16 '12 at 11:10
  • 1
    +1 because of using dd's default stdin instead of cat. Much better. – Warren Young Mar 16 '12 at 20:44
2

You can pipe the data over ssh:

dd if=/dev/sdb2 ibs=1M obs=64k | ssh -C user@remotehost "cat > /path/to/destination"

The -C option enables compression in the ssh protocol, which usually improves performance in cases like this.

If you have pv installed, you can include it in the pipe to see how much has already been transferred:

dd if=/dev/sdb2 ibs=1M obs=64k | pv | ssh -C user@remotehost "cat > /path/to/destination"
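If pv cannot be installed on the LiveCD, GNU dd prints its current record and byte counts to stderr when it receives SIGUSR1 and then keeps running; the same signal works on the real dd | ssh pipeline. A local sketch, with a /dev/zero transfer standing in for the real device:

```shell
# A deliberately slow local dd stands in for the real device transfer;
# its stderr is captured so the progress report can be inspected.
dd if=/dev/zero of=/dev/null bs=512 count=3000000 2>/tmp/dd_progress.log &
DD_PID=$!
sleep 1

# GNU dd handles SIGUSR1 by printing records in/out and bytes copied
# so far, then continues the transfer.
kill -USR1 "$DD_PID"
wait "$DD_PID"
cat /tmp/dd_progress.log
```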
jofel
  • why bs=64k? The Knoppix LiveCD is missing pv. This takes at least 36 hours to run on a sparse NTFS filesystem. –  Mar 16 '12 at 01:25
  • 1
    bs gives the block size for the transfer. AFAIK, the pipe buffer is 64k big, which is the reason for bs=64k. It may be better to use e.g. ibs=1M as in Patrick's answer for fast disk reading and obs=64k for good pipe performance. See [here](http://unix.stackexchange.com/a/11954/15241) for more information. You can install programs in Knoppix-based LiveCD systems if you have Internet access. Simply: apt-get update and then apt-get install pv. – jofel Mar 16 '12 at 08:12
  • You mean that pipes will fail if you send 50GB through one? How can I see if it fails? –  Mar 16 '12 at 08:15
  • 2
    No, they do not fail. A pipe is a limited queue between programs. On Linux, it holds 64k. If the queue is full, the writing program on the one side is automatically blocked until the program on the other side reads data from the pipe. Therefore it makes sense to write in blocks that fit into the queue. I've done some pipe speed measurements now. I see no big difference between obs=64k and obs=8k. Without any proof, it seems that obs=16k works best for me. But in our case, network or disk speed is the limiting factor, so obs= is not really important as long as it is not too small. – jofel Mar 16 '12 at 09:23
  • ...yes, but what if I do not specify the setting? –  Mar 16 '12 at 09:36
  • @hhh Then the default 512 byte block size is used. See the man page of dd. – jofel Mar 16 '12 at 10:13
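The block-size discussion above can be reproduced with a quick local benchmark; absolute numbers vary by machine, so compare them only relatively (this measures pipe throughput, not disk throughput):

```shell
# Push 256 MiB of zeros through a pipe with two different output block
# sizes and collect dd's timing summaries for comparison.
: > /tmp/pipe_bench.log
for OBS in 16k 1M; do
    echo "obs=$OBS" >> /tmp/pipe_bench.log
    dd if=/dev/zero ibs=1M count=256 2>/dev/null | dd of=/dev/null obs=$OBS 2>> /tmp/pipe_bench.log
done
cat /tmp/pipe_bench.log
```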
0

I would suggest using scp, which comes with every Linux distribution. The name stands for "secure copy".

$ scp -r folder-to-copy location-of-copy

  • 1
    ...this is about cloning /dev/sdb2 (it is a hard drive partition), not a traditional directory, no? –  Mar 15 '12 at 19:11