I have a text file, "foo.txt", that specifies a directory on each line:
data/bar/foo
data/bar/foo/chum
data/bar/chum/foo
...
There could be millions of directories and subdirectories. What is the quickest way to create all of them in bulk, using a terminal command?
By quickest, I mean the shortest time to create all the directories; since there are millions of them, there will be many write operations.
I am using Ubuntu 12.04.
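For concreteness, the kind of command I have in mind is a single streamed pipeline along these lines (just a sketch; the -n 5000 batch size is an arbitrary example, and sort -u is only there to drop duplicate lines):
sort -u foo.txt | xargs -d '\n' -n 5000 mkdir -p
Both sort and xargs read their input as a stream (sort spills to temporary files if needed), so the whole list never has to be held in memory at once.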
EDIT: Keep in mind that the list may not fit in memory, since there are MILLIONS of lines, each representing a directory.
EDIT: My file has 4.5 million lines, each representing a directory, composed of alphanumeric characters, the path separator "/", and possibly "../".
When I ran
xargs -d '\n' mkdir -p < foo.txt
it started printing errors after a while and kept going until I pressed Ctrl+C:
mkdir: cannot create directory `../myData/data/a/m/e/d': No space left on device
But running df -h
gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda 48G 20G 28G 42% /
devtmpfs 2.0G 4.0K 2.0G 1% /dev
none 401M 164K 401M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
free -m
total used free shared buffers cached
Mem: 4002 3743 258 0 2870 13
-/+ buffers/cache: 859 3143
Swap: 255 26 229
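Since df -h shows plenty of free space, I assume the error is not about data blocks. As far as I know, an ext4 filesystem also reports "No space left on device" when it runs out of inodes, which is what df -i checks (my actual output is in the next edit):
df -i /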
EDIT: df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda 2872640 1878464 994176 66% /
devtmpfs 512053 1388 510665 1% /dev
none 512347 775 511572 1% /run
none 512347 1 512346 1% /run/lock
none 512347 1 512346 1% /run/shm
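Doing the arithmetic on these numbers (my own rough estimate): each directory consumes one inode, my file lists up to 4.5 million directories, and df -i shows only 994176 inodes free on /, so running out of inodes before running out of blocks would explain the error. A sketch for counting how many distinct directories the list really implies, including the parent components that mkdir -p creates implicitly (assumes foo.txt is the list and the paths are relative, as in the examples above):
awk -F/ '{ p = ""; for (i = 1; i <= NF; i++) { p = p (i > 1 ? "/" : "") $i; print p } }' foo.txt | sort -u | wc -l
If that count is larger than IFree, every attempt will hit "No space left on device" no matter how much free space df -h shows.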
df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/xvda ext4 49315312 11447636 37350680 24% /
devtmpfs devtmpfs 2048212 4 2048208 1% /dev
none tmpfs 409880 164 409716 1% /run
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 2049388 0 2049388 0% /run/shm
EDIT: I increased the number of inodes and reduced the depth of my directories, and it seemed to work. It took 2 minutes 16 seconds this time round.
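For anyone hitting the same wall: as far as I know, the inode count of an ext2/3/4 filesystem is fixed when the filesystem is created, so "increasing the number of inodes" effectively means making a new filesystem with a larger inode table. A sketch only, where /dev/xvdb is a placeholder device name (not my root disk) and mkfs destroys whatever is already on it:
sudo mkfs.ext4 -N 8000000 /dev/xvdb   # request about 8 million inodes explicitly
sudo mkfs.ext4 -i 4096 /dev/xvdb      # or size the inode table as one inode per 4 KiB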
Comments:
Is the df -i from before or after you try to run xargs -d '\n' mkdir -p < foo.txt? – PM 2Ring Dec 15 '14 at 12:28
… (df -T /)? – Stéphane Chazelas Dec 15 '14 at 12:30