I have a directory with thousands of files. How can I move 100 of the files (any files will do) to another location?
15 Answers
for file in $(ls -p | grep -v / | tail -100)
do
    mv $file /other/location
done
That assumes file names don't contain blanks or newlines (assuming the default value of $IFS), wildcard characters (?, *, [), or start with -.
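As a quick illustration of the blanks problem, here is a throwaway-directory sketch (the dest name is made up): word splitting makes the unquoted loop try to move two nonexistent files.

```shell
# Sandbox: show how the unquoted $file breaks on a name with a space.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir dest
touch "two words"
for file in $(ls -p | grep -v / | tail -100); do
    mv $file dest 2>/dev/null    # expands to: mv two words dest
done
# The file was never moved:
[ -e "two words" ] && echo "still here"
```

Both mv attempts fail because neither "two" nor "words" exists as a file, so "two words" stays put.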

Note that this approach only works if there are no special characters (whitespace, nonprintable characters) in the file names. As a general rule, do not parse the output of ls. And always double-quote parameter and command substitutions. – Gilles 'SO- stop being evil' May 11 '11 at 07:20
Update to my previous comment: After reading Gilles' reference link, do not parse the output of ls, I've found that my find command was lacking. An arg was in the wrong place, and I've added null file-name endings. It is a bit long for a single line, but that's all I can do in a comment. Here is the fixed snippet: find . -maxdepth 1 -type f \( ! -iname ".*" \) -print0 | while read -rd $'\0' file ; do mv -- "$file" /other/location/ ; done – Peter.O May 11 '11 at 12:31
@Peter.O Just as a note that the read -d option is not portable to all shells, but if you are using bash anyway, -d '' should get you the same effect as -d $'\0'. – jw013 Dec 26 '11 at 22:59
FWIW if you want this to run as a one-liner, add a ; where each newline is now. – patrick Jun 07 '19 at 12:33
It's easiest in zsh:
mv -- *([1,100]) /other/location/
This moves the first 100 non-hidden files (of any type; change ([1,100]) to (.[1,100]) for regular files only, or (^/[1,100]) for any type but directory) in lexicographic name order. You can select a different sort order with the o glob qualifier, e.g. to move the 100 oldest files:
mv -- *(Om[1,100]) /other/location/
With other shells, you can do it in a loop with an early exit.
i=0
for x in *; do
    if [ "$i" = 100 ]; then break; fi
    mv -- "$x" /other/location/
    i=$((i+1))
done
Another portable way would be to build the list of files and remove all but the last 100.

+1 for safe shell expansion. Would it also be more readable with the increment operation $(( i++ )) or $[ i++ ]? – Jan 16 '12 at 12:46
@hesse Some shells don't implement ++ and --. You can write $((i+=1)) instead of i=$((i+1)); I'm not convinced that it's more readable. – Gilles 'SO- stop being evil' Jan 16 '12 at 18:18
:-D, I actually edited this answer thinking it was mine... Sorry. Feel free to revert as that changes the meaning. – Stéphane Chazelas Jan 20 '15 at 18:08
@StéphaneChazelas I do wonder why you'd exclude directories and symlinks; the question said nothing about that. – Gilles 'SO- stop being evil' Jan 20 '15 at 19:02
@Gilles, the accepted answer has ls -p | grep -v /, and so has the recent question that dups here. – Stéphane Chazelas Jan 20 '15 at 19:24
@Manuel I don't see why anything would be different on macOS. What's wrong? – Gilles 'SO- stop being evil' Jul 08 '20 at 12:19
The following worked for me. Sorry if it was posted previously, but I did not see it in a quick scan.
ls path/to/dir/containing/files/* | head -100 | xargs -I{} cp {} /Path/to/new/dir

This was my first thought when I needed to do something similar, but I didn't know how to specify the stdin parameter to xargs, so your answer taught me something. Thanks! – Fixee Dec 14 '21 at 16:00
If you're not using zsh:
set -- *
[ "$#" -le 100 ] || shift "$(($# - 100))"
mv -- "$@" /target/dir/
This moves the last 100 files (in alphabetical order).
Brief explanation:
- The asterisk expands to all non-hidden files in the current directory (in alphabetical order) and set assigns them to the positional parameters $1, $2, etc. $# is the number of parameters, so we test whether that is <= 100; if so, we're done, otherwise we shift (i.e., remove) all but 100 parameters from the parameter list by computing $# - 100.
- The remaining 100 (or fewer) parameters are stored in "$@" and are moved to /target/dir/.
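A throwaway-directory run of the same set/shift technique, shrunk to keeping the last 3 of 5 files (the names here are illustrative):

```shell
# Sandbox: keep only the last 3 of 5 files using set/shift.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir dest
touch a b c d e
set -- [a-e]                           # the five files, in alphabetical order
[ "$#" -le 3 ] || shift "$(($# - 3))"  # drop all but the last 3
mv -- "$@" dest/
ls dest                                # lists: c d e
```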

To move a hundred of them at random:
shuf -n 100 -e * | xargs -i mv {} path-to-new-folder
That assumes none of the file names contain quotes or backslashes or start with blanks or -, though. And since mv's stdin is now a pipe, the prompts it may issue for user confirmation will not work. A more correct, reliable and efficient approach, with shells that support ksh-style process substitution, would be:
xargs -r0a <(shuf -zn100 -e -- *) mv -t path-to-new-folder --
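A throwaway-directory check of the shuf selection (GNU shuf and xargs assumed; file names illustrative):

```shell
# Sandbox: move 2 of 5 files chosen at random.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir dest
touch f1 f2 f3 f4 f5
shuf -n 2 -e f* | xargs -I{} mv {} dest   # -e: shuffle the operands themselves
ls dest | wc -l                           # prints: 2
```

(GNU xargs -i is deprecated in favor of -I{}, hence the spelling above.)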

The following one-liner in shell would help.
foreach i (`find Source_Directory -type f --max-depth 1|tail -100`); do; {mv $i Target_Directory}; done
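As written, this mixes csh-style foreach with sh syntax, and --max-depth is not a find option (GNU find spells it -maxdepth). A plain-sh sketch of the same idea, checked in a throwaway directory (sort added so tail picks a deterministic set; this is still unsafe for names with whitespace, like the original):

```shell
# Sandbox: move the last 2 of 4 files found by find.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir Source_Directory Target_Directory
touch Source_Directory/f1 Source_Directory/f2 Source_Directory/f3 Source_Directory/f4
for i in $(find Source_Directory -maxdepth 1 -type f | sort | tail -2); do
    mv "$i" Target_Directory
done
ls Target_Directory                    # lists: f3 f4
```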

@phk, that would happen to work in zsh even if at first sight that looks quite alien to zsh syntax. Gilles has shown a much simpler way to do it in zsh. Even then, that's still more reliable than the currently accepted answer. – Stéphane Chazelas Sep 05 '18 at 22:32
The following command works if you are interested in using ls:
$ ls -rtd source/* | head -n100 | xargs cp -t destination
How does this work?
- ls -rtd source/* lists all the files with their relative path, from oldest to newest
- head -n100 takes the first 100 files
- xargs cp -t destination copies these files into the destination folder
Note that it assumes GNU cp and that file names don't contain blanks, newlines, quotes or backslash characters.
A more foolproof alternative with recent versions of GNU utilities would be:
ls --zero -rtd -- source/* | head -zn100 | xargs -r0 cp -t destination --
(The --s are not necessary here as file paths start with source/, but would be if you changed source/* to *.txt or * for instance.)
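A throwaway-directory check of the oldest-first selection (GNU cp and touch -d assumed; names and dates made up):

```shell
# Sandbox: copy the 2 oldest of 3 files, by modification time.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir src dest
touch -d '2001-01-01' src/old1
touch -d '2002-01-01' src/old2
touch -d '2003-01-01' src/new1
ls -rtd src/* | head -n2 | xargs cp -t dest   # oldest first, keep 2
ls dest                                       # lists: old1 old2
```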

This is a good answer. OP was asking how to move, not copy, but I believe this answer should also work by replacing the cp command with the mv command; they both support the -t flag. – Matt Popovich May 19 '22 at 03:57
#!/bin/bash
# Start c at 0 so the first directory also gets a full 100 files.
c=0; d=1; mkdir -p NEWDIR_${d}
for jpg_file in *.jpg
do
    if [ $c -eq 100 ]
    then
        d=$(( d + 1 )); c=0; mkdir -p NEWDIR_${d}
    fi
    mv "$jpg_file" NEWDIR_${d}/
    c=$(( c + 1 ))
done
Try this code.
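A throwaway-directory check of the batching logic, with the batch size shrunk to 3 and c started at 0 so every directory receives a full batch:

```shell
# Sandbox: split 7 jpg files into batches of 3.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
touch a.jpg b.jpg c.jpg d.jpg e.jpg f.jpg g.jpg
c=0; d=1; mkdir -p NEWDIR_${d}
for jpg_file in *.jpg; do
    if [ $c -eq 3 ]; then
        d=$(( d + 1 )); c=0; mkdir -p NEWDIR_${d}
    fi
    mv "$jpg_file" NEWDIR_${d}/
    c=$(( c + 1 ))
done
ls NEWDIR_3                            # lists: g.jpg
```

NEWDIR_1 and NEWDIR_2 each end up with 3 files, NEWDIR_3 with the leftover one.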

mmv is an outstanding utility which will also allow you to do mass renaming of files. (I had to sudo apt-get install mmv on my computer to install it.) Simple usage example: suppose you have a directory of files with extension .JPG that you'd like to change to a lowercase .jpg. The following command does the trick:
mmv \*.JPG \#1.jpg
The backslash is used to show a wildcard is coming up. The *.JPG matches anything with a JPG extension. In the "to" portion of the command, the #1 uses the matching text from the first wildcard to rename the file. Of course, you can put a different path before the #1 to also move the file.

It would be more beneficial if you provided how you would actually use the tool you suggest to accomplish the goal. – Dason Dec 30 '11 at 05:55
If you want to be safe / handle filenames with spaces, newlines, quotes, backslashes etc. in them, you have to use null-terminated separators:
find "$srcdir" -maxdepth 1 -type f -print0 | head -z -n 100 | xargs -0 -r -- mv -t "$destdir" --
EDIT2: NOTE: if you don't have head -z (for whatever reason) you can replace the above head -z -n 100 with tr '\0\n' '\n\0' | head -n 100 | tr '\0\n' '\n\0' (or see other ways).
-maxdepth 1 will avoid looking for files in subdirectories of $srcdir, so the only ones listed are the files within $srcdir.
-print0 will use \0 instead of newline (\n) between each listed file; this helps handle files containing newlines and spaces with xargs.
head -z will count \0-terminated (instead of newline-terminated) lines as lines, and -n 100 will list only the first 100 files that find found.
If you want to see what command xargs will execute, add -t (or --verbose).
xargs -0: "Input items are terminated by a null (\0) character instead of by whitespace, and the quotes and backslash are not special (every character is taken literally)".
xargs -r will not run mv if there are no files to be moved (i.e. if find did not find any files).
-- terminates processing of arguments as options to the program; more details here.
Sample output (runs one mv command and can handle files with newlines in their name too):
$ find /tmp/t -maxdepth 1 -type f -print0 | head -z -n 100 | xargs -t -0 -r -- mv -t /tmp -- ; echo "exit codes: ${PIPESTATUS[@]}"
mv -t /tmp -- /tmp/t/file containing quotes"' then spaces /tmp/t/file containing quotes"' /tmp/t/file containing a slash n here\n /tmp/t/file containing a new line here
and continues /tmp/t/s /tmp/t/-x and -L 1. /tmp/t/of replace-str in the initi /tmp/t/-thisfile_starts_with_a_hyphen and has spaces and a -hyphen here /tmp/t/-thisfile_starts_with_a_hyphen and has spaces /tmp/t/-thisfile_starts_with_a_hyphen /tmp/t/another with spaces /tmp/t/one with spaces /tmp/t/c /tmp/t/a
exit codes: 0 0 0
$ ls -1R /tmp/t
/tmp/t:
a
'another with spaces'
b
c
'file containing a new line here'$'\n''and continues'
'file containing a slash n here\n'
'file containing quotes"'\'''
'file containing quotes"'\'' then spaces'
'of replace-str in the initi'
'one with spaces'
s
'some dir'
-thisfile_starts_with_a_hyphen
'-thisfile_starts_with_a_hyphen and has spaces'
'-thisfile_starts_with_a_hyphen and has spaces and a -hyphen here'
'-x and -L 1.'
/tmp/t/b:
'file with spaces'
'/tmp/t/some dir':
'some file'
For find:
-maxdepth levels
       Descend at most levels (a non-negative integer) levels of directories
       below the starting-points. -maxdepth 0 means only apply the tests and
       actions to the starting-points themselves.
-type c
File is of type c:
b block (buffered) special
c character (unbuffered) special
d directory
p named pipe (FIFO)
f regular file
l symbolic link; this is never true if the -L option or the
-follow option is in effect, unless the symbolic link is
broken. If you want to search for symbolic links when -L
is in effect, use -xtype.
s socket
D door (Solaris)
-P     Never follow symbolic links. This is the default behaviour. When find
       examines or prints information about a file, and the file is a
       symbolic link, the information used shall be taken from the properties
       of the symbolic link itself.
-L     Follow symbolic links. When find examines or prints information about
       files, the information used shall be taken from the properties of the
       file to which the link points, not from the link itself (unless it is
       a broken symbolic link or find is unable to examine the file to which
       the link points). Use of this option implies -noleaf. If you later use
       the -P option, -noleaf will still be in effect. If -L is in effect and
       find discovers a symbolic link to a subdirectory during its search,
       the subdirectory pointed to by the symbolic link will be searched.
       When the -L option is in effect, the -type predicate will always match
       against the type of the file that a symbolic link points to rather
       than the link itself (unless the symbolic link is broken). Actions
       that can cause symbolic links to become broken while find is executing
       (for example -delete) can give rise to confusing behaviour. Using -L
       causes the -lname and -ilname predicates always to return false.
For head:
-n, --lines=[-]NUM
print the first NUM lines instead of the first 10; with the
leading '-', print all but the last NUM lines of each file
-z, --zero-terminated
line delimiter is NUL, not newline
EDIT: Someone mentioned that they didn't have head -z; this is the version that I was using (in Fedora 25):
$ head --version
head (GNU coreutils) 8.25
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by David MacKenzie and Jim Meyering.
$ rpm -qf /usr/bin/head
coreutils-8.25-17.fc25.x86_64
For xargs:
-0, --null
Input items are terminated by a null character instead of by
whitespace, and the quotes and backslash are not special (every
character is taken literally). Disables the end of file string,
which is treated like any other argument. Useful when input
items might contain white space, quote marks, or backslashes.
The GNU find -print0 option produces input suitable for this
mode.
-r, --no-run-if-empty
If the standard input does not contain any nonblanks, do not run
the command. Normally, the command is run once even if there is
no input. This option is a GNU extension.
-P max-procs, --max-procs=max-procs
       Run up to max-procs processes at a time; the default is 1. If
       max-procs is 0, xargs will run as many processes as possible at a
       time. Use the -n option or the -L option with -P; otherwise chances
       are that only one exec will be done. While xargs is running, you can
       send its process a SIGUSR1 signal to increase the number of commands
       to run simultaneously, or a SIGUSR2 to decrease the number. You cannot
       increase it above an implementation-defined limit (which is shown with
       --show-limits). You cannot decrease it below 1. xargs never terminates
       its commands; when asked to decrease, it merely waits for more than
       one existing command to terminate before starting another.
       Please note that it is up to the called processes to properly manage
       parallel access to shared resources. For example, if more than one of
       them tries to print to stdout, the output will be produced in an
       indeterminate order (and very likely mixed up) unless the processes
       collaborate in some way to prevent this. Using some kind of locking
       scheme is one way to prevent such problems. In general, using a
       locking scheme will help ensure correct output but reduce performance.
       If you don't want to tolerate the performance difference, simply
       arrange for each process to produce a separate output file (or
       otherwise use separate resources).
-t, --verbose
       Print the command line on the standard error output before executing
       it.
For cp:
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
-v, --verbose
explain what is being done
I came by here, but I needed to copy files in parts (99 each) from /DIR1 to /DIR2. I'll paste the script here in case it helps others:
#!/bin/bash
# Thanks to <Jordan_U> @ #ubuntu
# 06 Dec 2014
i=0
copy_unit=98
for file in /DIR1/*; do
    cp "$file" /DIR2
    if [[ "$i" -ge "$copy_unit" ]]; then
        echo "Pausing, press enter to continue"
        read
        i=0
    fi
    ((i++))
done

Try this:
find /source/directory -type f -maxdepth 1 -print | tail -100 | xargs -J % mv % /other/location/
Note: If you are using GNU xargs, replace the -J option with -I.
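With GNU xargs the -I form can be checked in a throwaway directory (sort added so tail picks a deterministic set; names illustrative):

```shell
# Sandbox: move the last 2 of 4 files, one mv per file via -I.
tmp=$(mktemp -d) && cd "$tmp" || exit 1
mkdir src dest
touch src/a src/b src/c src/d
find src -maxdepth 1 -type f -print | sort | tail -2 | xargs -I % mv % dest/
ls dest                                # lists: c d
```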

This is incorrect; you're passing three arguments to mv, the last one of which (probably) isn't a directory. And it doesn't really answer the question: the asker wants to move a given number of files, not all of them. – Mat Dec 30 '11 at 09:13
@navjotk It seems you have GNU xargs on your machine; please replace the -J option with -I. – Saumil Mar 28 '21 at 17:08
With a Perl one-liner (handling file names that contain newlines or spaces or start with a dash, and quitting on error with an explicit message):
count=100
perl -Mautodie -se '
    my @files = <*>;
    $count = @files if @files < $count;
    rename($_, "./other/dir/$_") for @files[0 .. $count - 1];
' -- -count="$count"

Another variation, inspired by https://unix.stackexchange.com/a/105042/66736 :
cp `ls -d ./* | head -n 100` tmpi
It may not be the fastest or the most elegant way, but it is easy to remember.

I know this thread is pretty old, but I found the answers more complicated than I thought they should be. This worked in CentOS, but it seems simple enough that it should probably work in other distros.
cp `ls someDir | head -n 100` someDir100/

That doesn't work because the output of ls won't include the leading someDir/ prefix, and won't work for file names with blank or wildcard characters or that start with -. – Stéphane Chazelas Nov 28 '12 at 14:38
Fair enough. I actually did cp `ls | head -n 100` ../someDir100/ from inside the target directory, and none of the file names satisfied those cases. Better to be lucky than good! – Jason Nov 28 '12 at 15:20
about.com and some other website for the list of options available that I can possibly use.. but found nothing like tail – gaijin May 11 '11 at 01:53