
Packages are unpacked and compiled on a test system in /tmp/test.

I need to determine the maximum size the directory reaches at any moment during these steps.

At the moment I help myself by recording the size with

du -sch /tmp/test >> /tmp/size.txt 

in a loop. But this is a very dirty workaround and is not precise: if the computer is very busy in /tmp/test, du may miss the peak size. Is there an elegant solution?
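
For reference, the loop is roughly the following (the one-second interval is arbitrary):

while true; do
    du -sch /tmp/test >> /tmp/size.txt
    sleep 1
done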

The available file systems are ext or btrfs, if that helps.

One reader asked for usage examples:

  • When preparing packages for Gentoo Linux, I need to know how much space is needed during compilation. For some packages, like Firefox, Boost or LibreOffice, it is very important that the package verifies that enough free space is available.

  • I wrote scripts which create many temporary files. It would be interesting to monitor the folder size.

Update: In the meantime I found sysdig, which looks promising for this task, but I have not managed to extract a folder size with it yet.

Jonas Stein
  • You should tell us why you need this and what the hard requirements are. Is the process lengthy or easily repeatable? How important is accuracy (what if the result is a little bigger than necessary? what if it is much bigger?)? The choice of file system could also make a difference in the result (block size differences, efficient packing of small files in some file systems). As it is, this is not a well written question. – Jan 07 '18 at 18:41
  • @hop I think it should be technically possible to measure it quite precisely and I am sure many do this already. I just do not know the right tools and commands. One has to capture the data from the kernel, not with a tool like du or df. – Jonas Stein Jan 07 '18 at 20:38

2 Answers


One possibility might be to monitor file-system events and run a logging command on file creation and deletion. Several tools would facilitate this approach, such as inotify, fswatch, or the Linux audit framework. You could either log the total disk usage after each event, or log only the change in disk usage and then use the logs to calculate the maximum size. See, for example, the following SuperUser post:
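
As a minimal sketch of that event-driven variant, assuming inotify-tools is installed (the watched path, event list, and output file are placeholders; re-running du on every event can still be expensive on large trees):

#!/bin/bash
# Re-measure the directory after every change and remember the peak.
max=0
inotifywait -m -r -e create -e delete -e moved_to -e modify /tmp/test |
while read -r _event; do
    size=$(du -sb /tmp/test | cut -f1)    # current size in bytes
    if [ "$size" -gt "$max" ]; then
        max=$size
        echo "$max" > /tmp/peak_size.txt  # latest known peak
    fi
done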

A different approach comes from the following post:

The suggestion there is to mount the directory in question on its own partition and then run iostat on that partition. That should allow you to continuously log IO activity on that directory.
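
A rough sketch of that setup, using a loop-mounted image file (the paths, image size, and loop-device name are assumptions to be adapted):

dd if=/dev/zero of=/var/tmp/test.img bs=1M count=4096   # 4 GiB backing file
mkfs.ext4 /var/tmp/test.img
sudo mount -o loop /var/tmp/test.img /tmp/test
losetup -a                    # find out which loop device was assigned
iostat -d 5 /dev/loop0        # log IO for that device every 5 seconds

With a dedicated file system, even polling df /tmp/test becomes cheap, since it reads the file-system statistics directly instead of walking the whole tree like du.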

For further discussion on monitoring disk IO you might refer to the following post:

igal
  • I second the idea to mount a separate file system on /tmp/test. Even df might be fast enough for the OP's purposes with that trick. An additional option would be to hack together a FUSE file system that monitors the size. – Jan 07 '18 at 18:37
  • @hop df gets slow if there are many files. And it sums only one snapshot, not the real maximum. – Jonas Stein Jan 07 '18 at 20:40

Schedule a script to run every minute:

*/1 * * * * /path/to/script.sh

The script should be the following:

#!/bin/bash
# Append the current size of the test directory to a log file.
du -sh /tmp/test >> outputfile

After some time, sort the output file and take the highest size.
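
For example, with GNU sort, which understands human-readable sizes:

sort -h outputfile | tail -n 1    # largest recorded size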

grg
    Doesn't this suffer from the same shortcoming that the OP is specifically trying to overcome? I think @jonas-stein is looking for a way to continuously monitor file-size. – igal Nov 20 '17 at 02:36