252

I know you are able to see the byte size of a file when you do a long listing with ll or ls -l. But I want to know how much storage a directory uses, including the files within that directory, the subdirectories within those, and so on. I don't want the number of files, but the amount of storage those files take up.

So: how can I find out how much storage a certain directory uses, recursively? I'm guessing that, if there is a command for this, it reports in bytes.

Rob Avery IV
  • 3,155

14 Answers

285

Try doing this: (replace dir with the name of your directory)

du -s dir

That gives the cumulative disk usage (not the size) of unique files (hard links to the same file are counted only once) of any type (including directories, though in practice only regular files and directories take up disk space).

That's expressed in 512-byte units with POSIX-compliant du implementations (including GNU du when POSIXLY_CORRECT is in the environment), but some du implementations give you kibibytes instead. Use -k to guarantee you get kibibytes.

For the size (not disk usage) in bytes, with the GNU implementation of du or compatible:

du -sb dir

or (still not standard), for human-readable sizes (disk usage):

du -sh dir
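
For illustration, here is what the four forms might print for the same directory (the numbers are made up; the first line assumes a du that reports 512-byte units, as described above):

$ du -s dir
2264    dir
$ du -sk dir
1132    dir
$ du -sb dir
1104500 dir
$ du -sh dir
1.2M    dir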

See man du (link here is for the GNU implementation).

53

You just do:

du -sh /path/to/directory

where -s is for summary and -h for human-readable (a non-standard option). Use the standard -k instead to get KiB.

Be careful, however: unlike ls, this will not show you the file size but the disk usage (i.e. a multiple of the filesystem block size). The file itself may actually be smaller, or even bigger.

So, to get the file sizes, you can use the --apparent-size option:

du -sh --apparent-size /path/to/directory

This is the size that would be transferred over the network if you had to.

Indeed, the file may have "holes" in it (i.e. be a sparse file), may be smaller than the filesystem block size, may be compressed at the filesystem level, etc. The man page explains this.
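
A quick way to see the difference is with a sparse file. This is just a sketch using GNU coreutils' truncate; the file name is made up:

truncate -s 1G sparse.img              # a 1 GiB file that is one big "hole"
du -sh sparse.img                      # disk usage: 0 (no blocks allocated)
du -sh --apparent-size sparse.img      # apparent size: 1.0G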

As Niklas points out, you may also use the ncdu disk usage analyser. Launched from within a directory, it will show you which folders and files use disk space, ordered from biggest to smallest.

You can see this question as well.

Totor
  • 20,040
  • I get a lot of "cannot read directory: Permission denied" errors; is it safe to use sudo du? – Shayan Aug 16 '20 at 14:33
  • 1
    @Shayan it is not dangerous, but will not give you the information about storage. Is that what you mean by "safe"? – Totor Oct 16 '20 at 02:07
  • 1
    I was scared sudo du might change the ownership of everything to root or something else unexpected. @Totor – Shayan Oct 16 '20 at 06:20
  • 2
    @Shayan no, it is purely a "read only" tool. No risk to modify data nor metadata here. :) – Totor Oct 17 '20 at 11:36
51

Note that if you want to know the size of every {sub}folder inside a directory, you can also use the -d or --max-depth option of du (which takes an argument: the recursion depth limit).

For instance:

du -h /path/to/directory -d 1

Will show you something like

4.0K /path/to/directory/folder1
16M  /path/to/directory/folder2
2.4G /path/to/directory/folder3
68M  /path/to/directory/folder4
8G   /path/to/directory/folder5

PS: Passing 0 as the depth limit is equivalent to the -s option. These two commands will give you the same result (the recursive, human-readable size of the given directory):

du -h /path/to/directory -d 0
du -sh /path/to/directory
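
For illustration, both would print a single summary line such as (the size is made up):

11G     /path/to/directory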
27

This will give you a list of sizes in the current directory, including folders (recursive totals) and files.

$ du -hs *
7.5M    Applications
9.7M    Desktop
 85M    Documents
 16K    Downloads
 12G    Google Drive
 52G    Library
342M    Movies
8.3M    Music
780M    Pictures
8.5G    Projects
8.0K    Public
 16K    client1.txt
Simon Liu
  • 371
23

An alternative to the already mentioned du command would be ncdu, which is a nice disk usage analyzer for use in a terminal. You may need to install it first, but it is available in most package repositories.

Edit: For the output format see these screenshots http://dev.yorhel.nl/ncdu/scr

Niklas
  • 339
  • 1
    Miracle! With this I can see which folders are holding large amounts of disk storage. I even found all of my files that had mysteriously disappeared. It is a great tool. – Faron Sep 16 '15 at 01:53
7

I like the following approach:

du -schx .[!.]* * | sort -h

where:

  • s: display only a total for each argument
  • c: produce a grand total
  • h: print sizes in a human-readable format
  • x: skip directories on different file systems
  • .[!.]* *: summarize every entry in the current directory, including "hidden" dot files (the .[!.]* pattern matches them while excluding . and ..)
  • | sort -h: Sort based on human-readable numbers (e.g., 2K 1G)
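
On a typical home directory the result might look something like this (the values are made up); sort -h orders the entries from smallest to largest, and the grand total produced by -c ends up last:

16K     Downloads
1.2M    .config
8.3M    Music
85M     Documents
342M    Movies
2.1G    Projects
2.5G    total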
4

In Unix, a directory just contains names and references to filesystem objects (inodes, which can refer to directories, files, or some other exotic things). A file can appear under several names in the same directory, or be listed in several directories. So "space used by the directory and the files inside" really makes no sense, as the files aren't "inside".

That said, the du(1) command lists the space used by a directory and everything reachable through it; du -s gives a summary, and with -h some implementations, like GNU du, give "human readable" output (i.e., kilobytes, megabytes).
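
A small demonstration of the hard-link point (the directory and file names here are made up):

mkdir demo && cd demo
dd if=/dev/zero of=a bs=1M count=10    # create a 10 MiB file
ln a b                                 # a second name (hard link) for the same inode
ls -lh a b                             # ls reports 10M for each name...
du -sh .                               # ...but du counts the shared data only once: ~10M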

vonbrand
  • 18,253
3

For me, on OS X El Capitán, the order of the depth option and the path had to be swapped compared to the answer above:

du -h -d 1 /path/to/directory
GZepeda
  • 39
3

ncdu (ncurses du)

ncdu was previously mentioned at https://unix.stackexchange.com/a/67843/32558 but I think that incredible tool deserves a longer description.

This awesome CLI utility allows you to easily find the large files and directories (recursive total size) interactively.

For example, from inside the root of a well known open source project we do:

sudo apt install ncdu
ncdu

The outcome is:

[screenshot: ncdu's interactive listing of the project root, directories sorted by size]

Then I press Down and Right on my keyboard to go into the drivers/ folder, and I see:

[screenshot: ncdu's listing inside the drivers/ subdirectory]

ncdu calculates file sizes recursively only once, at startup, for the entire tree, so it is efficient. This way it doesn't have to recalculate sizes as you move between subdirectories while you try to determine what the disk hog is.

"Total disk usage" vs "Apparent size" is analogous to du, and I have explained it at: https://stackoverflow.com/questions/5694741/why-is-the-output-of-du-often-so-different-from-du-b/55514003#55514003

Project homepage: https://dev.yorhel.nl/ncdu


Tested in Ubuntu 16.04.

Ubuntu list root

You likely want:

ncdu --exclude-kernfs -x /

where:

  • -x prevents ncdu from crossing filesystem boundaries
  • --exclude-kernfs skips special filesystems like /sys

MacOS 10.15.5 list root

To properly list root / on that system, I also needed --exclude-firmlinks, e.g.:

brew install ncdu
cd /
ncdu --exclude-firmlinks

otherwise it seemed to go into an infinite link loop, likely due to: https://www.swiftforensics.com/2019/10/macos-1015-volumes-firmlink-magic.html

The things we learn for love.

ncdu non-interactive usage

Another cool feature of ncdu is that you can first dump the sizes in a JSON format, and later reuse them.

For example, to generate the file run:

ncdu -o ncdu.json

and then examine it interactively with:

ncdu -f ncdu.json

This is very useful if you are dealing with a very large and slow filesystem like NFS.

This way, you can first export only once, which can take hours, and then explore the files, quit, explore again, etc.
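
For example, for a slow NFS mount the workflow might look like this (the paths are hypothetical):

ncdu -o /tmp/nfs-scan.json /mnt/slow-nfs    # the slow part: walks the whole tree once
ncdu -f /tmp/nfs-scan.json                  # instant: browse the saved scan as often as you like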

The output format is just JSON, so it is easy to reuse it with other programs as well, e.g.:

ncdu -o -  | python -m json.tool | less

reveals a simple directory tree data structure:

[
    1,
    0,
    {
        "progname": "ncdu",
        "progver": "1.12",
        "timestamp": 1562151680
    },
    [
        {
            "asize": 4096,
            "dev": 2065,
            "dsize": 4096,
            "ino": 9838037,
            "name": "/work/linux-kernel-module-cheat/submodules/linux"
        },
        {
            "asize": 1513,
            "dsize": 4096,
            "ino": 9856660,
            "name": "Kbuild"
        },
        [
            {
                "asize": 4096,
                "dsize": 4096,
                "ino": 10101519,
                "name": "net"
            },
            [
                {
                    "asize": 4096,
                    "dsize": 4096,
                    "ino": 11417591,
                    "name": "l2tp"
                },
                {
                    "asize": 48173,
                    "dsize": 49152,
                    "ino": 11418744,
                    "name": "l2tp_core.c"
                },

Tested in Ubuntu 18.04.

Ciro Santilli OurBigBook.com
  • 18,092
  • 4
  • 117
  • 102
2

You can use "file-size.sh" from the awk Velour library:

ls -ARgo "$@" | awk '{q += $3} END {print q}'
Zombo
  • 1
  • 5
  • 44
  • 63
1

This works to get the size of each directory under the current directory:

du -h --max-depth=1 .

In general:

du -h --max-depth=1 <dirpath>
0

This is the best for me:

find . -type d -exec du -sk {} \;

You will get all the directories recursively, with the top-level directory's total size at the top:

588591456   ./photo
2171676 ./photo/2004
163916  ./photo/2004/AAA
114252  ./photo/2004/BBB
49660   ./photo/2004/CCC
7238148 ./photo/2005
184 ./photo/2005/.thumbcache
33592   ./photo/2005/AAA
228 ./photo/2005/BBB
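
A rough sketch of an equivalent that only walks the tree once (du alone already recurses into every subdirectory); sorting by the path column is optional:

du -k . | sort -k2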

Zioalex
  • 286
0

To find the total size of the files contained in a folder recursively, omitting symlinks, directory sizes and the implied . and .. entries, I customized Zombo's answer above:

ls -ARgo "$@" | awk '{if ($1 ~ /^-/) {q += $3}} END {print q}'

I needed this to verify the upload of a web application's local storage to Azure blob storage, comparing the file sizes in bytes of the remote and local directories (the blob storage in use doesn't store files in directories, so I needed to sum just the file sizes).

To do this, it sums the ls size column only for rows starting with the - character (i.e. regular files), so:

ls -ARgo lists the contents of a directory recursively (-R) with sizes in bytes, omitting the implied . and .. (-A), in long format without the owner (-g) and group (-o) columns.

~/Scrivania/my_folder$ ls -ARgo
.:
total 1020
-rw-rw-r-- 1 894543 gen  9 09:53 photo.png
-rw-rw-r-- 1 141318 feb  1 09:28 ryxbb3kkit1nfnxwzu7i.webp
drwxrwxr-x 2   4096 feb  1 11:52 sub_folder

./sub_folder:
total 864
-rw-rw-r-- 1 137859 gen 13 10:26  186_20230106_corsi_Arogis.pdf
-rw-rw-r-- 1 257591 ott 20 12:49 '2010-03-27 - Piano Formativo SNaTSS-1-1.pdf'
-rw-rw-r-- 1 484746 ott 19 16:02  CNCOyXtu.html

The awk function sums the third column ($3) of each resulting ls row if the first column ($1) starts with a dash (-).

It seems to be enough for my purpose, but be careful that:

  • this is not disk usage proper: directories and other non-regular files are not counted
  • it will not count symlinks (you would need to change the regex inside awk; see the sketch below)
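
If you did want to add the symlinks' own sizes as well (for a symlink, the ls size column is the length of its target path), a sketch would be to widen the regex to also match the l file type:

ls -ARgo "$@" | awk '{if ($1 ~ /^[-l]/) {q += $3}} END {print q}'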
0

Combining parts of the other provided answers, here is my suggested command:

cd /path/to/directory/of/interest
sudo du -hsc *

This will list every entry in the directory with its recursive size, plus a grand total.
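
Illustrative output (the values are made up); the final total line comes from the -c flag:

7.5M    Applications
342M    Movies
8.5G    Projects
8.9G    total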

cdahms
  • 121
  • 1
    As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. – Community Mar 10 '23 at 20:40