
I've got a file server with a bunch of disks running together in a BTRFS pool and want to add an SSD for caching. Speeding things up is not my main goal; I want to catch the regular small accesses so that the hard disks can be shut down most of the time when they are not under heavy use (not running them 24/7 will save energy and should make the disks last longer).

As far as I know, there are currently two SSD caching techniques implemented in Linux: dm-cache and bcache. dm-cache is still said to be more efficient, but development is ongoing for both, and I don't need to tune for the absolute maximum efficiency.

Reading bcache's documentation, I came upon these options:

writeback_delay: When dirty data is written to the cache and it previously did not contain any, waits some number of seconds before initiating writeback. Defaults to 30.

writeback_percent: If nonzero, bcache tries to keep around this percentage of the cache dirty by throttling background writeback and using a PD controller to smoothly adjust the rate.

writeback_running: If off, writeback of dirty data will not take place at all. Dirty data will still be added to the cache until it is mostly full; only meant for benchmarking. Defaults to on.

Setting a large enough value for writeback_delay seems to do the job for me: only write back once an hour, or (I assume this would happen) once the cache runs full.
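
For illustration, a minimal sketch of how those knobs could be set at runtime through sysfs, assuming the bcache device shows up as bcache0; the device name and values are placeholders matching the plan above, not tested recommendations:

    # put the device into writeback mode (the default writethrough mode
    # would hit the backing disks on every write)
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # wait an hour before starting background writeback of dirty data
    echo 3600 > /sys/block/bcache0/bcache/writeback_delay

    # let up to 10% of the cache stay dirty before throttled
    # background writeback kicks in
    echo 10 > /sys/block/bcache0/bcache/writeback_percent

Note that sysfs settings do not survive a reboot, so they would have to be reapplied from a udev rule or a boot script.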

Is this a reasonable setup, and do I have to consider anything else to succeed in spinning down the disks? I'm also fine with going a completely different route if it fulfills my requirements.

It seems @gorkypl is looking for a solution to a similar problem, but with different requirements and a different environment, and hasn't received an answer yet either.

Jens Erat
  • Use bcache, please. – mikeserv Jan 08 '15 at 00:06
  • @JensErat I have a similar use case. Have you been able to solve this problem? – Max Beikirch Jul 03 '21 at 12:41
  • This is quite some time ago. I think I partially managed to back a BTRFS raid with bcache, but never got really happy with the setup. Ended up replacing the entire setup with a completely different setup for other reasons later though, and don't completely remember the details any more. – Jens Erat Jul 03 '21 at 16:17
  • You may also want to switch off the feature by which large sequential operations (read and write) bypass the cache and go directly to the backing device. – Joachim Wagner Nov 10 '21 at 13:26
  • ... as well as the congested read/write thresholds in case your SSD sometimes exceeds the default thresholds. – Joachim Wagner Dec 17 '21 at 11:51
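
A minimal sketch of the two tweaks the last two comments refer to, again assuming the device is bcache0 and with the cache set UUID left as a placeholder:

    # cache large sequential I/O too instead of letting it bypass the SSD
    # and wake the disks (0 disables the cutoff)
    echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

    # never bypass the cache just because the SSD looks congested
    # (<cache-set-uuid> is the directory name under /sys/fs/bcache/)
    echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
    echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us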

1 Answer


I think your approach is too complicated.

read caching: Nothing to do here. If you have enough RAM, Linux already does this automatically through the page cache.
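
Purely as an illustration of that point: the "buff/cache" column of free shows how much RAM the kernel is already using for this automatic caching:

    # RAM used by the kernel's page cache appears under "buff/cache"
    free -h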

write caching: Basically, this is what you want. But if the writes go to disk in the end, that will cause a wake-up, too.

So you could put the affected filesystems directly onto a RAM disk (/dev/shm) or the SSD instead.
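
For example, a scratch filesystem could be kept on tmpfs; the mount point and size here are placeholders:

    # mount a RAM-backed filesystem; its contents are lost on reboot
    mount -t tmpfs -o size=2G tmpfs /mnt/scratch

    # or persistently via /etc/fstab:
    # tmpfs  /mnt/scratch  tmpfs  size=2G  0  0

Keep in mind that anything on tmpfs disappears on power loss, so this only suits expendable data.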

power saving: I do not think that frequent spin-down/spin-up will save power. On the contrary, the disks might die earlier, which adds the energy consumed in producing replacements. Also, spinning up is very power-intensive.

Nils
  • The OP suggests spinning up one hour after the cache becomes dirty. Spin-up takes only a few seconds. Even if the disk drew 100 W during this time, it would add less than 0.2 W to the average consumption (e.g. 100 W × 5 s / 3600 s ≈ 0.14 W). This is a lot less than the typical savings in power-save mode. – Joachim Wagner Nov 10 '21 at 13:19