How to know the size of a directory? Including subdirectories and files.
Does this answer your question? How do I get the size of a directory on the command line? – StayOnTarget Dec 07 '19 at 16:55
12 Answers
du -s directory_name
Or to get human readable output:
du -sh directory_name
The -s option means that it won't list the size of each subdirectory, only the total size.

Actually du's default unit is 512-byte blocks according to POSIX, and kilobytes on Linux (unless the environment variable POSIXLY_CORRECT is set) or with du -k. – Gilles 'SO- stop being evil' Oct 12 '10 at 17:49
@Gilles: Good catch. I've removed the "number of bytes" bit from my answer. – sepp2k Oct 12 '10 at 17:53
If the directory is very big and has lots of subdirectories, it takes a lot of time... almost 1 min. Is that normal? Is there a way to get the size more quickly? – yeahman Oct 15 '15 at 19:59
I needed to calculate the size of my folder "bag"; du -sh bag worked perfectly! – António Almeida Mar 04 '16 at 12:15
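Gilles's point about default units above can be checked with a quick sketch (this assumes GNU coreutils du; the file size and temp paths are only illustrative):

```shell
# Same directory, two unit conventions (GNU du assumed).
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/f" bs=1024 count=100 2>/dev/null   # a 100 KiB file
kb=$(du -sk "$dir" | cut -f1)                               # kilobytes, explicit -k
blocks=$(POSIXLY_CORRECT=1 du -s "$dir" | cut -f1)          # 512-byte blocks per POSIX
echo "$kb KiB = $blocks blocks of 512 bytes"
rm -rf "$dir"
```

On a typical Linux filesystem the block count is exactly twice the kilobyte count, since both are derived from the same underlying disk usage.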
While a separate package such as ncdu may work well, the same comparison of many folders can be done, to some degree, by simply giving du a list of folders to size up. For example, to compare the top-level directories on your system:
cd /
sudo du -sh ./*

du -hd1 will list the sizes of all the directories in human-readable format, e.g.
656K ./rubberband
2.2M ./lame
652K ./pkg-config

GNU du takes a -b option. See the man page and the info page for more help:

-b, --bytes
    equivalent to --apparent-size --block-size=1
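A minimal sketch of the difference -b makes, assuming GNU du (the temp file and its 5-byte content are just for illustration):

```shell
# -b reports apparent size (bytes of content), not disk blocks consumed.
f=$(mktemp)
printf 'hello' > "$f"                 # 5 bytes of content
apparent=$(du -b "$f" | cut -f1)      # 5 (GNU du only)
ondisk=$(du -k "$f" | cut -f1)        # usually 4: one 4 KiB filesystem block
echo "apparent=$apparent bytes, on disk=$ondisk KiB"
rm -f "$f"
```

The on-disk figure depends on the filesystem's block size, which is why -b (or --apparent-size) is the option to use when you care about the data itself rather than its footprint.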

du -csh
The -c option produces a grand total.

The -c doesn't make sense to use together with -s, right? -s only displays the size of the specified directory, that is, the total size of the directory. – Andreas Storvik Strauman Jun 05 '18 at 10:43
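A small sketch of where -c does earn its keep: with several arguments it appends a combined "total" line (GNU du assumed; the temp directories are only illustrative):

```shell
# -c with multiple arguments: one line per directory, then a grand total.
a=$(mktemp -d); b=$(mktemp -d)
printf 'x' > "$a/f"
du -csh "$a" "$b"                            # two per-directory lines, then "total"
last=$(du -cs "$a" "$b" | tail -n1 | cut -f2)
echo "last label: $last"
rm -rf "$a" "$b"
```

With a single directory argument, as the comment notes, -s already gives the total and -c adds nothing but a duplicate line.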
du -ahd 1 | sort -h gives a better visualization, with the items sorted by size:
$ du -ahd 1 | sort -h
2.1M ./jinxing.oxps
2.1M ./jx.xps
3.5M ./instances_train2014_num10.json
5.9M ./realsense
7.8M ./html_ppt
8.5M ./pytorch-segmentation-toolbox
24M ./bpycv
24M ./inventory-v4
26M ./gittry
65M ./inventory
291M ./librealsense
466M .

Try
du -hax --max-depth=1 / | grep '[0-9]G' | sort -nr
This helps find large directories, which you can then sift through using du -sh ./*.

You can use "file-size.sh" from the awk Velour library:
ls -ARgo "$@" | awk '{q += $3} END {print q}'

This gives a more accurate count than du. Unpack a tarball on two servers and use du -s (with or without --bytes) and you will likely see different totals, but using this technique the totals will match. – Angelo Babudro Nov 18 '19 at 23:52
You can also use ls -ldh:
ls -ldh /etc
drwxr-xr-x 145 root root 12K 2012-06-02 11:44 /etc
-l is for long listing; -d displays info about the directory itself, not its contents; -h displays the size in human-readable format.

This isn't correct; the person asking is clearly looking for the footprint of a directory and its contents on disk. @sepp2k's answer is correct. – blong Jun 05 '12 at 13:16
The ls -ldh command only shows the size of the directory's inode structure. That metric reflects the size of the index table of file names, not the actual size of the file content within the directory. – linbianxiaocao Mar 28 '16 at 18:19
The original question asked for the size, but did not specify whether it was the size on disk or the actual size of the data.
I have found that the calculation of du can vary between servers with the same size partition using the same file system. If the file-system characteristics differ this makes sense, but otherwise I can't figure out why. The ls|awk answer that Steven Penny gave yields a more consistent answer, but still gave me inconsistent results with very large file lists.
Using find gave consistent results for 300,000+ files, even when comparing one server using XFS and another using EXT4. So if you want to know the total bytes of data in all files, I suggest this is a good way to get it:
find /whatever/path -type f -printf "%s\n"|awk '{q+=$1} END {print q}'
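A minimal, self-contained sketch of the same find/awk technique (GNU find's -printf is assumed; the temp directory and tiny files are only illustrative):

```shell
# Sum apparent file sizes: find prints each file's byte count, awk totals them.
dir=$(mktemp -d)
printf 'abc' > "$dir/a"      # 3 bytes
printf '12345' > "$dir/b"    # 5 bytes
total=$(find "$dir" -type f -printf "%s\n" | awk '{q += $1} END {print q}')
echo "$total"                # 8
rm -rf "$dir"
```

Because this sums apparent sizes (st_size) rather than allocated blocks, the result is independent of filesystem block size, which is why it stays consistent between XFS and EXT4.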

I always install the ncdu package and view the sizes of all directories with a graphical representation. This is because I usually need to know what's taking up the most disk space on my machines, regardless of how much any single directory sums to.
Usage: sudo ncdu /
(You do not need sudo for folders on which you have read permission.)
It will take a while to scan disk usage statistics for the whole file system. It has a nice command-line graphical representation and includes keyboard navigation using the arrow keys, such as going deeper or higher in the scanned path. You can also delete items by pressing d.
Since the best answer has already been provided, I tried the command below:
sudo find . -maxdepth 1 -exec du -sk {} \; | awk 'NR > 1' | awk 'BEGIN{sum=0}{sum=sum+$1}END{print sum}'
Output:
679445
