
I'm running SLES 11.4 on a server with ~5 disks in RAID-5 via an LSI MegaRAID controller, which provides 2.2 TB of usable space. When I created it, the array showed up as /dev/sdb; I created one partition, formatted it as XFS, and mounted it as /data, which showed up as 0% used with 2.2 TB of space available. df -h still reports the mount as 2.2T, but now at 100% full.

If I run du -sh /data/* as root, I get a listing that adds up to less than 100 GB. What options do I have to rectify this? What might cause something like this?
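
For reference, the comparison I'm describing looks roughly like the following. This is only a sketch of the usual checks for a df/du mismatch, assuming lsof is installed on the box; /data and /dev/sdb1 are the paths from the setup above.

    df -h /data      # filesystem-level view: size, used, available
    du -shx /data    # directory-tree view; -x stays on this one filesystem
    lsof +L1 /data   # files on /data that were deleted but are still held open by a process
    df -i /data      # inode usage, in case inodes rather than blocks are exhausted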

ron

1 Answer


A reboot fixed the problem; one user was doing a lot of parallel I/O, which has caused problems in the past. My XFS mount came back at 85 GB used, 2.1 T free. After the reboot I also did umount /dev/sdb1; xfs_repair -n /dev/sdb1, and that reported no problems. I could not run xfs_repair before the reboot; I got an XFS library initialization error, so something was messed up after 40 days of uptime.
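
For anyone repeating this later, the unmount/check sequence above is roughly the following. Only a sketch: it assumes nothing else has files open on /data, and the repair without -n would only be run if the read-only pass actually reports damage.

    umount /dev/sdb1           # the filesystem must not be mounted while xfs_repair runs
    xfs_repair -n /dev/sdb1    # -n = no-modify: scan and report problems without changing anything
    # xfs_repair /dev/sdb1     # only without -n if the dry run reports real damage
    mount /dev/sdb1 /data      # remount afterwards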

  • There were no hidden files under /data/ (one way to double-check for anything hidden underneath the mount point is sketched after this list)
  • /dev/sdb1 was mounted as /data
  • There is simply /data/ron, /data/john, /data/misc, and so on; du -sh /data/* showed values for the ~8 folders under /data adding up to only ~85 GB in total. Nothing more to explain; I don't know what else to do with XFS other than reboot and xfs_repair -n.
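
On the "no hidden files" point above: the usual way to confirm nothing is sitting underneath the mount point is to bind-mount the root filesystem somewhere else and look at that copy of /data. A sketch, using /mnt/rootview as a made-up temporary directory:

    mkdir -p /mnt/rootview
    mount --bind / /mnt/rootview    # a second view of /, without whatever is mounted on top of /data
    du -sh /mnt/rootview/data       # close to empty means nothing is hidden under the mount point
    umount /mnt/rootview
    rmdir /mnt/rootview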

Any insight or advice, I guess, leave as a comment.

This isn't the first time I've had this happen with XFS.

ron