
/var is showing as full to many apps, such as Nagios, Puppet, and the LVM tools (pvs, vgs, etc.).

df -h output

6.0G  4.3G  1.4G  77% /var

vgs output

/var/lock/lvm/V_rootvg:aux: open failed: No space left on device
  Can't get lock for rootvg
  Skipping volume group rootvg

lsof +L1 shows nothing under /var anymore, so I don't think there are unlinked files that have yet to be cleared from the /var filesystem. I don't understand why 1.4G free on a 6G filesystem is considered full. I know some space is reserved by the system on each filesystem, but that can't be it; it's too much space. The filesystem is ext3 on Red Hat 5.

dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          c8f44510-e8f7-4e2e-950a-1410b069910e
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              393216
Block count:              1572864
Reserved block count:     78627
Free blocks:              1183083
Free inodes:              388144
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      63
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Mon Apr 29 13:12:02 2013
Last mount time:          Wed Oct 23 19:10:44 2013
Last write time:          Wed Oct 23 19:10:44 2013
Mount count:              6
Maximum mount count:      -1
Last checked:             Mon Apr 29 13:12:02 2013
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:           256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      8766dfd5-c802-4bc3-81cc-21869e810656
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             32M
Journal length:           8192
Journal sequence:         0x0112568e
Journal start:            3334
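
For what it's worth, the reserved block count shown above only accounts for about 5% of the filesystem, nowhere near 1.4G (a quick shell calculation from the dumpe2fs figures):

$ echo $(( 78627 * 4096 / 1024 / 1024 ))   # reserved blocks x block size, in MiB
307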
  • Did you know that any files with open file handles won't be removed until the file handle is closed? – Elliott Frisch Apr 02 '14 at 17:41
  • What is the output of df -i /var? – Hauke Laging Apr 02 '14 at 17:41
  • @HaukeLaging fwiw the dumpe2fs shows 388,144 inodes free. – Bratchley Apr 02 '14 at 17:58
  • @GreggLeventhal, to get a rough check of how much space is deleted but still opened, you might try running lsof -s | egrep "(deleted)" | awk 'BEGIN {total=0} {total=total+$7} END{print total}' (be forewarned, I haven't tested this) – Bratchley Apr 02 '14 at 18:16
  • @JoelDavis, I'm betting that count is wrong and df -i will show he is indeed out, and a fsck will fix the count in the superblock. – psusi Apr 02 '14 at 18:20
  • @JoelDavis I guess that is the number at mount time. Would be a bit extreme to write the superblock every time a file is created or deleted, wouldn't it? At any rate it seems to me that this number does not get updated during normal operation. Try for yourself. – Hauke Laging Apr 02 '14 at 18:24
  • no thanks I believe you. I just mentioned it in case it was overlooked (and I didn't notice that the last mount was back in October) which it doesn't sound like happened. – Bratchley Apr 02 '14 at 18:34
  • @JoelDavis I was indeed unaware that dumpe2fs gives this information. – Hauke Laging Apr 02 '14 at 18:49
  • Is it an option to unmount and fsck the volume (if the solution does not turn out to be the unavailability of inodes)? Shouldn't be a problem if it's not usable anyway. – Hauke Laging Apr 02 '14 at 18:50
  • Have a look here, does that answer you? – terdon Apr 02 '14 at 19:27
  • @psusi You were right, I am out of inodes. What do I do now? – Gregg Leventhal Apr 02 '14 at 19:45
  • @GreggLeventhal if you extend the filesystem you'll get more inodes available. Since you're on LVM you can do it without unmounting the filesystem with lvextend -r. Alternatively, you can see if there are files you can unlink. – Bratchley Apr 02 '14 at 19:49
  • @JoelDavis I can't even see if I have free space in the volume group because /var being broken is messing with LVM tools. – Gregg Leventhal Apr 02 '14 at 20:20
  • @GreggLeventhal Delete a few old files in /var/log or move them to tmpfs or another volume. – Hauke Laging Apr 02 '14 at 20:37
  • @GreggLeventhal like I said earlier and Hauke said in the last comment, deleting a few files underneath /var is an option. Doing so would likely free the inodes required to create the lock file. Otherwise you can temporarily disable locking in /etc/lvm/lvm.conf by setting locking_type = 0 (just be sure to revert the change once you've resolved the issue). – Bratchley Apr 03 '14 at 14:04
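
To illustrate that last workaround, the change is a single setting in lvm.conf (a sketch; the exact stanza layout may differ slightly on your system, and it should be reverted once /var has free inodes again):

# /etc/lvm/lvm.conf
global {
    # 0 disables locking entirely, so the LVM tools stop trying to create
    # lock files under the full /var; the normal default is locking_type = 1
    locking_type = 0
}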

2 Answers


Looking at the comments, others have helped you diagnose that you're out of inodes. If you need to make a few available so you can get basic access back to your system, you could delete the following files on a CentOS 5 install, assuming you can live without them.

Example

$ shopt -s extglob   # the ?(.gz) pattern needs bash extended globbing
$ sudo rm -fr /var/log/*.[1-9]?(.gz)

This will remove the previously rotated log files in /var/log, which should buy you a few dozen inodes to start.
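
Before actually running the rm, it may be worth previewing what the extended glob matches:

$ ls -l /var/log/*.[1-9]?(.gz)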

Counting inodes

using df

I usually use df to check the number of inodes available.

$ df -i /
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     59932672  807492 59125180    2% /

using tune2fs

You can also use tune2fs; it needs the path to the underlying block device (here, an LVM mapper device).

$ tune2fs -l /dev/mapper/VolGroup00-LogVol00 | grep -i inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Inode count:              59932672
Free inodes:              59126861
Inodes per group:         32768
Inode blocks per group:   1024
First inode:              11
Inode size:       128
Journal inode:            8
First orphan inode:       21561629
Journal backup:           inode blocks

Freed up some inodes, now what?

With some breathing room, you have several options.

  1. I would start by quickly putting together a list of files that can be targeted for deletion so you can get more headroom. I'd focus on /tmp and /var for more potential files to remove.

  2. If you have old versions of Java or anything else installed under /usr/local or /opt, I'd pick on those next.

  3. I'd start formulating a list of installed RPMs that can be uninstalled.

  4. If you've been using YUM to do updates on this server, you can clear out its cache.

    $ sudo yum clean all
    
  5. Look into adding additional space; a sketch of growing the LVM volume follows this list.
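
For that last option: since /var sits on LVM (the volume group rootvg appears in the question), growing the filesystem online also grows the inode count, because each new ext3 block group carries its own inodes. A sketch, assuming the LV is named var (the LV name and the 2G figure are illustrative guesses):

$ sudo vgs rootvg                           # check for free extents in the VG
$ sudo lvextend -r -L +2G /dev/rootvg/var   # grow the LV and resize ext3 in one step

Note this requires the LVM tools to be working again, so free a few inodes first or use the locking workaround from the comments.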

slm
  • How about /var/log/*.[1-9]?(.gz) for matching old log files (extglob required). – Graeme Apr 02 '14 at 23:43
  • @Graeme - yes that's better, I'll add it, thanks. Careful or ppl will know that we've run out of inodes before 8-) – slm Apr 02 '14 at 23:47

One likely reason for running out of inodes is that a large number of files have accumulated in a particular directory for whatever reason. Check the usual suspects, e.g. /tmp, /var/tmp, /var/log. If you don't find anything, here is a command I have cobbled together to list the top 50 directories in the filesystem containing the most files/directories at their first level.

find / -xdev -type d -exec sh -c '
  # count the entries at the first level of each directory
  num=$(find "$0" -maxdepth 1 | wc -l); echo "$num $0"' {} \; |
  sort -n |   # sort numerically by entry count
  tail -50    # keep the 50 directories with the most entries

Note that the top level of each mount point is included as well; this is not trivial to exclude.

Graeme
  • It was the pnp4nagios directory under /var/spool/; it creates a very large number of small files. I ended up creating a logical volume and mounting it directly to this directory. This resolved my issues. – Gregg Leventhal Apr 03 '14 at 19:24
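
For anyone else taking that route, here is a rough sketch of such a fix (rootvg comes from the question; the LV name, size, and mkfs options are illustrative guesses):

# create a dedicated LV for the inode-hungry spool directory
$ sudo lvcreate -L 2G -n pnp4nagios rootvg

# ext3 with a lower bytes-per-inode ratio yields far more inodes,
# which suits a directory full of tiny files (-i 4096 = one inode per 4 KiB)
$ sudo mkfs -t ext3 -i 4096 /dev/rootvg/pnp4nagios

# stop the services writing here, copy the data over, then mount in place
$ sudo mount /dev/rootvg/pnp4nagios /mnt
$ sudo cp -a /var/spool/pnp4nagios/. /mnt/
$ sudo rm -rf /var/spool/pnp4nagios/*
$ sudo umount /mnt
$ sudo mount /dev/rootvg/pnp4nagios /var/spool/pnp4nagios

Add a matching /etc/fstab entry so the mount survives reboots.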