
I'm trying to run a command similar to the one below in a bash script. It should search through all subfolders of $sourcedir and copy all files of a certain type to the root level of $targetdir.

#!/bin/bash

# These are set as arguments to the script, not hard-coded
sourcedir="/path/to/sourcedir"
targetdir="/path/to/targetdir"

find "$sourcedir" -type f -name "*.type" -exec sh -c 'cp "$1" "$2/`basename "$1"`"' "{}" "$targetdir" \;

This seems to be pretty close, except that "{}" and "$targetdir" aren't arriving as $1 and $2 inside the sh -c script.

I would like to do this as close to the "right way" as possible, with tolerance for special characters in filenames (specifically single quote characters).

Edit: I see people suggesting using xargs or argument chaining. I was under the impression that this is only ok for a limited number of arguments. If I have, for example, thousands of .jpg files I'm trying to copy from a number of gallery directories into a giant slideshow directory, will the solutions chaining arguments still work?
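For reference, my understanding is that the kernel's argument-size limit can be checked with getconf ARG_MAX, and that xargs automatically splits its input into as many command invocations as needed to stay under it. A throwaway illustration of the splitting (the numbers just stand in for file names):

getconf ARG_MAX                     # combined size limit for arguments plus environment
seq 1 1000000 | xargs echo | wc -l  # prints more than 1: xargs ran echo once per batch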

Edit 2: My problem was that I was missing a _ before the first argument to sh -c in the -exec command. For anyone who is curious about how to make the find command work, add the _ and everything will be fine:

find "$sourcedir" -type f -name "*.type" -exec sh -c 'cp "$1" "$2"' _ "{}" "$targetdir" \;

I have accepted an answer below, though, because it accomplishes the same task more efficiently and elegantly.

Matthew
  • That is exactly why xargs was created: to automatically handle huge numbers of arguments to regular commands that had limits. Also worth considering: most of the maximum-argument limits have been vastly improved for the standard GNU utils. You will also see a performance benefit from avoiding all those process forks, which on thousands of files is relevant. – J. M. Becker Feb 02 '12 at 20:08
  • With gnu-find and + instead of ";", you can handle multiple arguments at once with find too, and you avoid the complicated argument passing with -print0. – user unknown Feb 02 '12 at 23:45
  • @userunknown: I'm responding to this below your answer. – J. M. Becker Feb 03 '12 at 00:32
  • @user unknown Well, I do LOVE this code. It's at least fully POSIX-compliant and will work without any GNU stuff on the machine at all. There are those times when you do need this, especially on servers at work. – syntaxerror Dec 14 '14 at 22:22

4 Answers


You need to pass {} to the shell as its trailing arguments, then loop over each one:

find "$sourcedir" -type f -name "*.type" -exec sh -c 'for f; do cp "$f" "$0"; done' "$targetdir" {} +

Note: The way this works is that the first argument after the sh -c script normally becomes $0, the shell's name; we can exploit this by passing $targetdir in that position and then using the special parameter $0 inside the script to access the target directory.
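The same command spelled out over several lines with comments (just a reformatted rendering of the one-liner above, not a different approach):

find "$sourcedir" -type f -name "*.type" -exec sh -c '
    # $0 holds the target directory passed right after the script
    for f; do          # "for f" with no "in" list iterates over "$@", the batch of found files
        cp "$f" "$0"
    done
' "$targetdir" {} +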

SiegeX

You want to copy files of a specific type to a specific directory? This is best done with xargs, and you don't even need that sh. This is a more appropriate way to do it, and it should also run more efficiently.

find "$sourcedir" -type f -name "*.type" | xargs cp -t targetdir

If you need to handle special file names, then use NUL as your separator:

find "$sourcedir" -type f -name "*.type" -print0 | xargs -0 cp -t "$targetdir"
J. M. Becker

If you don't believe in the church of xargs:

find "$sourcedir" -type f -name "*.mp3" -exec cp -t "$targetdir" {} +

Explanation:

cp -t a b c d 

copies b, c and d to the target dir a.
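A quick throwaway check of that behaviour (this assumes GNU cp, since -t is a GNU extension):

mkdir a && touch b c d
cp -t a b c d
ls a    # b  c  d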

-exec cmd {} +

invokes the command on a big bunch of files at once, not one after the other (which is the default if you use ";" instead of +). This is why we have to pull the targetdir to the front and mark it explicitly as the target.
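A throwaway way to watch the difference in the number of invocations:

find . -name "*.type" -exec echo one-batch: {} +    # echo runs once per large batch
find . -name "*.type" -exec echo per-file: {} \;    # echo runs once per file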

This relies on the -t flag, which is a GNU cp extension, so it won't work with other cp implementations. (The -exec ... {} + form itself is specified by POSIX, though not every old find implements it.)

TechZilla is right insofar as sh isn't needed to invoke cp.

If you don't use xargs, which is needless most of the time in combination with find, you're spared from having to learn the -print0 and -0 flags.

user unknown
  • Obviously, which method anyone might prefer is somewhat subjective. With that said, there are still reasons to use xargs. Biggest example: what if you are not finding files? I use xargs in place of general for loops all the time; find is much smaller in scope. In addition, find only supports that + if you are using GNU find, it's not defined in POSIX. So while you should feel free to prefer find alone, it does not do everything xargs does. When you consider GNU xargs, you also get -P, which does multi-core. It's worth learning xargs regardless. – J. M. Becker Feb 03 '12 at 00:38
  • Subjectively speaking, my opinion obviously does differ. Objectively, on the other hand, your answer is also correct. It is one of the few best solutions available. – J. M. Becker Feb 03 '12 at 00:43
  • @TechZilla: I hope to remember to revisit this site when find starts supporting parallel invocation. :) In most cases with copying/moving, disk speed will be the limiting factor, but SSDs might change the picture. You're right that providing a non-GNU solution is a fine thing. Otherwise, the find solution is objectively shorter, simpler, and uses only two processes. – user unknown Feb 03 '12 at 01:06
read -p "SOURCE: " sourcedir
read -p "TYPE: " type
read -p "TARGET: " targetdir
find -L $sourcedir -iname "*.$type" -exec cp -v {} $targetdir \;
HalosGhost