
Example: I have filesystems of various sizes (JFS2, ext3, etc.) on various operating systems (Linux, AIX), and some of them are at e.g. 90%, 95%, or 98% usage.

Questions: Is it bad to have filesystems nearly full? Can it cause performance problems, filesystem corruption, or hardware problems?

UPDATE:

  • The question is about a corporate environment. Does anyone have authoritative articles/URLs regarding the effects? :)

  • "Which directories are on these filesystems?" - any, e.g. SAP, Oracle, etc.

  • The disks are usually provided by a SAN.

gasko peter
  • Which directories are on these filesystems? –  Jul 30 '13 at 10:46
  • Is this for corporate or personal computer usage? – 41754 Jul 30 '13 at 11:14
  • If your FS corrupts data and does not tell the OS, you do not have a good FS. If it does tell the OS but the OS does not report it properly to the user, the OS is failing at error handling and/or at the UI for that particular user. And of course you develop software with the understanding that filesystems are not infinite. – 41754 Jul 30 '13 at 11:18
  • Your question seems to be about storage provided through a SAN for databases. If so, the answer depends on the storage settings within these databases - do they auto-extend or not? – Nils Aug 01 '13 at 12:31
  • No, there is NO auto-extending. – gasko peter Aug 01 '13 at 14:11

7 Answers


A filesystem does not break just because it is full, so there is no problem from the filesystem's point of view. Files are more likely to fragment once the filesystem is nearly full, and performance problems are possible depending on the filesystem, but that is usually not critical.

The real problem is that on a full filesystem, any write will fail. So it depends on what will be trying to write on such a filesystem.

Many programs must be able to write / save data in order to function properly. So if the filesystem is full when something tries to write, you will experience data loss or breakage at the application layer. "I tried to save your data, but couldn't" is a case that many programs do not handle particularly well. In the worst case, a program starts overwriting its old save file before noticing there is not enough room for the new one, and you lose both.
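One common defensive pattern against exactly this failure mode (a minimal sketch, not taken from any of the answers) is to write the new data to a temporary file on the same filesystem and only rename it over the old file once the write has fully succeeded. If the disk fills up mid-write, the write fails, but the old copy survives:

```python
import os
import tempfile

def safe_save(path, data):
    """Save data to path without risking the existing file.

    The new content goes to a temporary file on the same filesystem;
    only after write + fsync succeed is it renamed over the original.
    On a full disk the write raises OSError (ENOSPC), but the old
    file is left untouched.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the data really hit the disk
        os.replace(tmp, path)     # atomic rename on POSIX
    except OSError:
        os.unlink(tmp)            # clean up the partial temp file
        raise
```

The rename must happen on the same filesystem as the target, which is why the temporary file is created in the target's directory rather than in /tmp.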

For system-critical things (e.g. any writes happening at startup/shutdown, logging facilities, etc.), a full filesystem can in the worst case render your system unable to function properly; the ext* filesystems keep a root reserve for that very reason, to leave system processes (root) some free space when everything else is full. This is a case where you should provide additional storage or delete some old data.
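The root reserve mentioned above is visible from user space: POSIX `statvfs` reports both the total free blocks (`f_bfree`) and the blocks available to unprivileged users (`f_bavail`), and on ext* filesystems the gap between them is (roughly) the reserved space. A minimal sketch:

```python
import os

def reserve_info(path="/"):
    """Report free space on the filesystem at path.

    f_bfree counts all free blocks, f_bavail only those available to
    unprivileged users; on ext* the difference approximates the root
    reserve (5% by default, adjustable with tune2fs -m).
    """
    st = os.statvfs(path)
    block = st.f_frsize
    free_total = st.f_bfree * block   # free space including the reserve
    free_user = st.f_bavail * block   # free space for non-root users
    return free_total, free_user, free_total - free_user

total, user, reserved = reserve_info("/")
print(f"free (root): {total}  free (users): {user}  reserved: {reserved}")
```

So "100% used" in `df` output for an ext filesystem usually still leaves the reserved blocks writable by root.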

frostschutz

From a production viewpoint, a nearly full filesystem is a bad state to be in. First, performance degrades as disk usage increases: when a disk nears full capacity, there are fewer sequential free areas of the disk in which to store data. This hurts performance through additional disk seeks and the latency of waiting for a free sector to reach the disk head.

More important is the potential effect on the system. Is the server providing a vital service? How long before the development and operations teams become aware that services are down? How long before users get angry because no service is available? Applications often freeze when there is no storage left to write to. There may be knock-on effects that cause further problems, adding even more time before services are fully restored. And even once service has been restored, the system state may be unbalanced; for example, a huge backlog of incoming data accumulated during the downtime causes delays in processing.
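The operational point above is why production filesystems are monitored with a warning threshold well below 100%, so operations can react before writes start failing. A minimal sketch of such a check (the 90% threshold is an arbitrary example; real shops would use Nagios, Zabbix, or similar):

```python
import shutil

def check_usage(path, warn_pct=90.0):
    """Return (percent_used, over_threshold) for the filesystem at path.

    warn_pct is an example threshold; a real monitoring system would
    page operations long before the filesystem actually fills up.
    """
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    return pct, pct >= warn_pct

pct, warn = check_usage("/")
print(f"/ is {pct:.1f}% full" + (" -- WARNING" if warn else ""))
```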

suspectus
  • +1 for "When a disc nears full capacity there are less sequential areas of the disk to store data in" - never even thought of that! – Suman Jul 30 '13 at 16:21
  • This is not a disk. We are talking about LUNs provided by a SAN. Normally this is a striped portion across a large number of physical disks. – Nils Aug 01 '13 at 12:33

It's not intrinsically bad, but something to be extremely careful about. You do not want to completely run out of space on a drive: if the operating system needs more incidental space than you have, whatever your computer tries to write next will simply fail. The consequences depend on what it was trying to save.

Sudden "space-eaters" include an unexpectedly large swap file (perhaps caused by a memory leak), a sleep image that can take up as much disk space as you have RAM, and large output files from programs. You simply need to make sure you are never going to run out of space.

The best-case scenario is that the space runs out while growing the swap file, in which case the system usually just crashes with no long-term ill effects.

It's usually safe to fill up partitions that do not hold an operating system (e.g. an external hard drive where you manually archive your data). Performance can decrease as the drive fills up, but if you are constantly running with high disk usage you should really just get another hard drive so you can stop worrying.

Moriarty

You should be considering a few things:

  1. Do you NEED the data that is consuming all of that space? There may be a lot of data that you WANT but do not need. If you decide that you NEED it, make sure you are making full use of it. In other words, is there information about your business, market, or customers that you can obtain from the data? Can you get a sense of trends or patterns from it?
  2. If you need all of the data, consider the cheapest way to store it: in-house servers, remote storage, compression, or keeping only the most recent X% of the data. If you have no budget for expansion or remote storage, get rid of the oldest 10% of the data simply to improve performance.
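To illustrate the "store it compressed" option, here is a hypothetical sketch (the function name and the idea of replacing the original file are my own, not from the answer) that gzip-compresses a file in place, which works well for old, rarely read data such as aged logs or exports:

```python
import gzip
import shutil
from pathlib import Path

def compress_file(path):
    """Replace path with a gzip-compressed copy named path + '.gz'.

    A sketch of the 'keep it, but compressed' strategy: the original
    is deleted only after the compressed copy has been written.
    """
    src = Path(path)
    dst = src.with_suffix(src.suffix + ".gz")
    with src.open("rb") as f_in, gzip.open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)  # stream, no full read into RAM
    src.unlink()                         # remove the uncompressed original
    return dst
```

For text-heavy data like logs, this commonly reclaims most of the space while keeping the data retrievable with `gunzip`.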

In the case of an external hard drive where no OS is present, you can safely use up most of the drive without noticeable performance degradation. For a disk holding the OS, there are cache files whose size depends on the OS and the types of tasks it performs.

For Linux, check this out: How can I benchmark my HDD?

See these texts for more thorough exposition: http://www.amazon.com/Memory-Mass-Storage-Giovanni-Campardo-ebook/dp/B00F76KCGY/ref=sr_1_4?s=digital-text&ie=UTF8&qid=1420389894&sr=1-4&keywords=hard+drive+performance

http://www.amazon.com/SImple-Choosing-Installing-Upgrading-Super-fast-ebook/dp/B00LNZTXFE/ref=sr_1_2?s=digital-text&ie=UTF8&qid=1420389894&sr=1-2&keywords=hard+drive+performance

Anthon

Full drives lead to low performance. I run my PC on Ubuntu 12.10 and had my drives full: applications opened very slowly and booting was slow. So I bought an external hard disk and have maintained 50-60% disk usage since then. I suggest you do the same.

Rui F Ribeiro

If this is "static" database storage, there is no harm in filling it up - especially when auto-extending of the DB is turned off. Anything else would be a waste of valuable SAN space. Monitoring can either be turned off for these filesystems, or the warning level can be raised to 99% or even 100%.

This only holds for non-growing data, so logs should go elsewhere. The log storage should be closely monitored, though, and it should be large enough that an admin can react in time to a warning from monitoring.

Nils

There are three really good podcast episodes about filesystems and, if I recall correctly, how features like automatic defragmentation work, which may help you understand your problem better. I think those three episodes are well worth listening to if you are interested in filesystems. But be warned: it's extremely geeky.

Episode 1, Episode 2, and the third is number 58, which I can't post because of too little reputation :\

The podcast is called hypercritical and the host is John Siracusa.

hashier