11

How can I move files up a directory when there might be hundreds or thousands of files in the directory, and you might not be sure whether there are dupes in ... What method would you use?

How to handle dupes will vary: sometimes we'll overwrite, sometimes we need to be safer. IO can be important because these are production servers, but given the quantity, interactive prompting isn't an option. Preservation of permissions, timestamps, etc. is important. We usually won't know what the data is.

Oh, and using mv isn't required; rsync and cp solutions are welcome.

Note: we're running CentOS 5.5, so let me know if a suggestion won't work there because it relies on a more recent... feature.

xenoterracide
  • What would you want to do about dupes? – Iain Dec 17 '10 at 15:51
  • @Iain depends on the situation? I work at a webhost... it really depends on the mv and whether we care about preservation in the case of... and whether we've already made a backup. This question is fairly open. I'm just looking for good options, and maybe a comment about whether or not it can bite you and how. – xenoterracide Dec 17 '10 at 15:54
  • I assume there's too many files for the shell to handle mv * ../ or mv -i * ../? – Michael Mrozek Dec 17 '10 at 15:54
  • @Michael I'm sure it varies... in some cases probably not. In some cases maybe. I'm trying to catch as many options for my work env as possible. – xenoterracide Dec 17 '10 at 15:58

7 Answers

7

I would recommend using rsync from the parent:

rsync -avPr -b --suffix='-original' child/* .

which will back up any existing duplicate files in the parent as file-original.
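
For example (hypothetical file names, assuming report.txt exists in both directories and the two copies differ):

cd parent
rsync -avPr -b --suffix='-original' child/* .
ls report.txt*

After the run, report.txt is the copy that came up from child/, and report.txt-original is the parent's previous version, renamed because of -b/--suffix.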

Tok
  • I think -a implies -r – xenoterracide Dec 17 '10 at 17:26
  • @xenoterracide - Right you 'r' – Tok Dec 17 '10 at 17:42
  • @Tok is it possible to do the copy's as hardlinks? so as not to waste IO. (like -l for cp) – xenoterracide Dec 18 '10 at 12:13
  • @xenoterracide - You can use the -H or --hard-links flags for rsync to preserve hard links. – Tok Dec 20 '10 at 14:14
  • @Tok yeah but that doesn't copy as hardlinks, it just preserves existing ones. I'm thinking of a copy like my cp -bal does – xenoterracide Dec 20 '10 at 14:20
  • @xenoterracide - You can use the --link-dest=DIR flag to achieve this behavior as: rsync -avP --link-dest=/path/to/src /path/to/src/* /path/to/dest/ which will hard link into dest/ any files that are unchanged between the transfer source and the --link-dest directory (here the same directory, so all files). Ordinarily you see this flag used when you wish to re-link backup files without copying their data, such as --link-dest=/most/recent/backup. – Tok Dec 20 '10 at 14:46
4
cp -bal . ..

This will copy everything in the current directory to the directory above it, retaining all permissions, using hard links to minimize IO where possible, and creating filename~ for any duplicates.

After that:

rm -rf . ; cd .. ; rmdir <originaldir>;
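
To check that the copies really are hard links rather than duplicated data (somefile is a hypothetical name), compare inode numbers before removing anything:

ls -li somefile ../somefile

If the inode number in the first column is identical for both paths, the two names point at the same data blocks, so no IO was spent copying file contents.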
xenoterracide
2

This example will move files from '/parent/old-dir' to '/parent':

cd /parent

rsync -av --progress old-dir/ .

rm -rf old-dir

By rsync's default rules, duplicates in /parent will be replaced with the versions from old-dir whenever the two copies differ (add -u if you only want newer files from old-dir to win).
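
If you want to see which duplicates would be replaced before committing, one option is to add rsync's -n (--dry-run) flag first:

rsync -avn --progress old-dir/ .

This lists the files rsync would transfer without actually touching anything.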

Alexander Pogrebnyak
1

You can try

find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 -r -I '{}' mv '{}' ..

which will overwrite dupe files in .. (-mindepth 1 keeps . itself out of the list)

You can use mv -u '{}' instead if you don't want to overwrite when the dupe in .. is the same or newer.
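
Put together, the non-overwriting variant would look something like:

find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 -r -I '{}' mv -u '{}' ..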

1

mv -i only prompts if the destination exists.

yes n | mv -i … moves all files that don't exist in the destination directory. On FreeBSD and OSX, you can shorten this to mv -n ….

Note that neither of these will merge a directory argument with an existing directory of the same name in the destination directory.


A separate issue is how to act on all the files in the current directory. There are two problems: grabbing all files (* omits dot files) and not running into a command line length limit. On Linux (or more generally with GNU find and GNU coreutils):

find . -mindepth 1 -maxdepth 1 -exec mv -i -t .. -- {} +

With GNU find but not GNU coreutils (or older GNU coreutils):

find . -mindepth 1 -maxdepth 1 -exec sh -c 'mv -i -- "$@" "$0"' .. {} +

Portably:

find . -name . -o -exec sh -c 'mv -i -- "$@" "$0"' .. {} + -type d -prune

As usual zsh makes things easier. It doesn't have a command line length limitation internally, so if you use its mv builtin you don't need to worry about that. And you can tell it not to ignore dot files with the D glob qualifier. Limitation: this doesn't work across filesystems (as of zsh 4.3.10).

zmodload zsh/files
mv -i -- *(D) ..
0

I said on our ML

mv * ..

obviously this isn't very safe... it will overwrite things. It might have limits that I've never run into.
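
The limit in question is the kernel's maximum combined size of arguments and environment (the "Argument list too long" error). On Linux you can check it with:

getconf ARG_MAX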

xenoterracide
0

The following is a Python template that I have used to good effect in the past.

#!/usr/bin/env python
#
# Bart Kastermans, www.bartk.nl
#
# rename a collection of files in a directory
import os
import shutil

directory = "/Users/kasterma/Music/Audio Hijack/"

# only work on files whose name starts with a D
files = [filename for filename in os.listdir(directory) if filename.startswith("D")]

for filename in files:
    # keep the first 23 characters of the name and give it an .mp3 extension
    shutil.move(os.path.join(directory, filename),
                os.path.join(directory, filename[:23] + ".mp3"))
kasterma