...
```
37M total
29M total
42M total
43M total
36M total

real    0m1.271s
user    0m0.561s
sys     0m1.278s
```

is what I get with:

```
time find ~/sda1 -type f -exec du -ch {} + | grep total
```

So now I need a tool to sum up the totals! (The several `total` lines are an artifact of `-exec ... +` overflowing the argument list, so `du` runs more than once.)
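One way to actually sum those per-batch totals is to let `awk` add up the `total` lines. A small sketch on a throwaway directory, assuming GNU `du` (where `-b` reports apparent size in bytes so the totals are machine-summable):

```shell
# Sum the per-batch "total" lines that appear when `-exec ... +`
# splits into several du invocations (GNU du: -c grand total,
# -b apparent size in bytes).
dir=$(mktemp -d)
printf 'aaaa'     > "$dir/a"   # 4 bytes
printf 'bbbbbbbb' > "$dir/b"   # 8 bytes
find "$dir" -type f -exec du -cb {} + \
  | awk '/total$/ {sum += $1} END {print sum}'   # prints 12
```

Using `-b` instead of `-h` matters here: human-readable suffixes like `37M` would need extra parsing before they could be added.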
But with:

```
time find ~/sda1 -type f -printf "%s\n" | awk '{a+=$1;} END {print a;}'
10483650002

real    0m0.550s
user    0m0.251s
sys     0m0.349s
```
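It is easy to convince yourself that this pipeline adds apparent sizes correctly by running it on a throwaway directory with known file sizes (the directory here is a temporary one created just for the demonstration):

```shell
# Sum apparent file sizes (bytes) with find + awk, as above.
dir=$(mktemp -d)
head -c 100 /dev/zero > "$dir/x"   # 100 bytes
head -c 250 /dev/zero > "$dir/y"   # 250 bytes
find "$dir" -type f -printf '%s\n' | awk '{a += $1} END {print a}'   # prints 350
```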
And:

```
# time du ~/sda1 -sh
11G     /.../sda1

real    0m0.458s
user    0m0.116s
sys     0m0.340s
```
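Keep in mind that `du -s` also counts directories themselves, so it will never match the byte sum over regular files exactly. A sketch, assuming GNU `du` with `-b` for apparent bytes:

```shell
# du -sb includes the directory entry itself, so its total exceeds
# the sum over regular files alone (assumes GNU du).
dir=$(mktemp -d)
head -c 500 /dev/zero > "$dir/f"
du -sb "$dir" | awk '{print $1}'                                    # 500 + directory size
find "$dir" -type f -printf '%s\n' | awk '{a += $1} END {print a}'  # prints 500
```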
I get it nice and fast.

It seems inefficient to `du` each file when `find` is `stat`ing them anyway and can give the size for free. With `find ... -exec du {} +`, `du` is degraded to a calculator of the `-c` "grand total".
There is of course some difference between file size (in bytes) and disk usage (in blocks).
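That gap can be dramatic for sparse files. A sketch, assuming GNU `stat` and `truncate` on a filesystem with sparse-file support:

```shell
# Apparent size vs. disk usage: a hole-only file reports 1 MiB of
# size while allocating (almost) no blocks.
f=$(mktemp)
truncate -s 1M "$f"                 # extend to 1 MiB without writing data
stat -c 'size=%s blocks=%b' "$f"    # e.g. size=1048576 blocks=0
rm -f "$f"
```

So `find -printf "%s"` sums the `size=` column, while `du` (without `--apparent-size`) works from the blocks.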
Here, just to show that the original `find ... -printf "%s\n" | awk '{...} END {...}'` works:
```
# find ~ -maxdepth 1 -printf "%s\n" | awk '{a+=$1;} END {print a;}'
1093990
# find ~ -maxdepth 1 -printf "%s\n" | awk '{a+=$1;} END {printf "%x\n",a;}'
10b166
```
This is my first `awk` ever. I tested it on `~ -maxdepth 1`, and the round number struck me (as did that "GB" thing at the end of the OP), so I played around until I got 10**6 = 16×64K.
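That round hex number checks out with plain shell arithmetic: 16 × 64 KiB is 2^20 = 0x100000, just below the total above:

```shell
printf '%x\n' $((16 * 64 * 1024))   # prints 100000  (hex for 1048576)
echo $((0x10b166))                  # prints 1093990 (the decimal total above)
```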
`find -type f ... | awk` and `find -type d ... du {}` both make sense – Nov 20 '19 at 16:43