I've got a file server with several disks running together in a BTRFS pool and want to add an SSD for caching. I'm not primarily after speed: I want the SSD to absorb the regular small accesses so that the hard disks can be shut down most of the time, whenever the server is not under heavy use (not running them 24/7 saves energy and should make the disks last longer).
As far as I know, there are currently two SSD caching implementations in Linux: dm-cache and bcache. dm-cache is still said to be more efficient, but development is ongoing for both, and I don't need to tune for the absolute maximum efficiency.
Reading bcache's documentation, I came upon these options:
writeback_delay: When dirty data is written to the cache and it previously did not contain any, wait some number of seconds before initiating writeback. Defaults to 30.
writeback_percent: If nonzero, bcache tries to keep around this percentage of the cache dirty by throttling background writeback and using a PD controller to smoothly adjust the rate.
writeback_running: If off, writeback of dirty data will not take place at all. Dirty data will still be added to the cache until it is mostly full; only meant for benchmarking. Defaults to on.
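For reference, these settings are exposed per backing device in sysfs (the paths come from the bcache documentation; the device name bcache0 is just an assumption for my setup):

    # Inspect the current writeback settings of a bcache device
    # (replace bcache0 with your actual device):
    cat /sys/block/bcache0/bcache/writeback_delay     # seconds, defaults to 30
    cat /sys/block/bcache0/bcache/writeback_percent   # target dirty percentage
    cat /sys/block/bcache0/bcache/writeback_running   # 1 = writeback enabled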
Setting a large enough value for writeback_delay seems to do the job for me: only write back once an hour, or (I assume this would happen) when the cache runs full.
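Concretely, I'd try something like this (a sketch; the device name is assumed, and writeback_delay should only matter once the device is actually in writeback mode):

    # Cache writes on the SSD instead of passing them through to the disks:
    echo writeback > /sys/block/bcache0/bcache/cache_mode
    # Hold freshly dirtied data for up to an hour before starting writeback:
    echo 3600 > /sys/block/bcache0/bcache/writeback_delay

As far as I can tell, these sysfs values don't survive a reboot, so they would have to be re-applied from a boot script or udev rule.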
Is this a reasonable setup, and do I have to consider anything else to get the disks to actually spin down? I'm also fine with going a completely different route if it fulfills my requirements.
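In case it helps to answer: beyond bcache itself, I'm aware the disks' own standby timers and read-triggered metadata writes also matter. This is what I'd combine it with (device and mount point names are placeholders):

    # Let the backing disks enter standby after 60 minutes of inactivity
    # (hdparm -S values 241..251 count in 30-minute units, so 242 = 1 hour):
    hdparm -S 242 /dev/sda /dev/sdb
    # Avoid atime updates, which turn every read into a (disk-waking) write:
    mount -o remount,noatime /mnt/pool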
It seems @gorkypl is looking for a solution to a similar problem, but with different requirements and a different environment, and hasn't received an answer yet either.
bcache, please. – mikeserv Jan 08 '15 at 00:06