75

I bought an SSD and I am going to set up my desktop system with a completely fresh Linux installation.

SSDs are known to be fast, but they have a disadvantage: The number of writes (per block?) is limited.

So I am thinking about which data should be located at the SSD and which at the HDD drive. Generally I thought that data that changes frequently should be placed on the HDD and data that doesn't change frequently can be put on the SSD.

  • Now I read this question, with a similar scenario. In the answers there is written: "SSD drives are ideally suited for swap space..."

    Why are SSDs ideally suited for swap space? OK, I see high potential for raising the system's performance, but doesn't swap data change frequently and hence there would be many writes on the SSD resulting in a short SSD lifetime?

  • And what about the /var directory? Doesn't its contents change frequently, too? Wouldn't it be a good idea to put it on the HDD?

  • Is there any other data that should not be located on an SSD?

user41961
  • 753
  • As an added point, we used a RAID 1 with SSDs on our AIX production DB. Granted they are probably enterprise-grade SSDs (haven't actually checked), but still... consumer grade would still be acceptable for most applications where your /proc and /home directories reside on your SSD. – Chad Harrison Jun 27 '13 at 16:29
  • 8
    @hydroparadise /proc is maintained by the kernel and does not live on disk, whether spinning-platter or SSD. – user Jun 29 '13 at 13:33
  • Oops, had a brain fart. /var or /etc would be suitable replacements for /proc for the example. I suppose /proc would still be relevant if it spilled over to using swap. – Chad Harrison Jul 01 '13 at 13:14

6 Answers

92

If you worry about write cycles, you won't get anywhere.

You will have data on your SSD that changes frequently; your home, your configs, your browser caches, maybe even databases (if you use any). They all should be on SSD: why else would you have one, if not to gain speed for the things you do frequently?

The number of writes may be limited, but a modern SSD is very good at wear leveling, so you shouldn't worry about it too much. The disk is there to be written to; if you don't use it for that, you might just as well use it as a paperweight and never even put it into your computer.

No storage device is well suited for swap space. Swap is slow, even on SSD. If you need to swap all the time, you're better off getting more RAM one way or another.

It may be different for swap space that's used not for swapping but for suspend-to-disk. Naturally, the faster the storage medium, the faster the system will suspend and wake up again.

Personally, I put everything on SSD except the big, static data. A movie, for example, doesn't have to waste expensive space on SSD, as a HDD is more than fast enough to play it. It won't play any faster using SSD storage for it.

Like all storage media, SSD will fail at some point, whether you use it or not. You should consider them to be just as reliable as HDDs, which is not reliable at all, so you should make backups.

frostschutz
  • 48,978
  • 16
    This answer totally ignores the fact that lots of data is written rarely but read frequently. – jwg Jun 27 '13 at 12:31
  • 24
    Ummm, how does that change the answer? The theme here is "gain speed for the things you do frequently". What's it matter if that is reading or writing? The point is use the SSD for things that involve lots of disk IO regardless of reads or writes. – Pete Jun 27 '13 at 14:12
  • While I'm a Windows guy I have all the frequently-accessed stuff that's small enough on my SSDs. Despite heavy use their life is wearing away at about 1%/year. (Accurate data as my drives actually report the % used.) – Loren Pechtel Jun 27 '13 at 23:55
  • 1
    @LorenPechtel So you're saying that you actually expect that SSD to be functional in about a hundred years' time? Somehow I doubt it will be, regardless of usage patterns. :) "Increasing at a constant rate" does not necessarily translate to "accurate", particularly when you are (as is most likely the case) measuring one thing but reporting it as another. If you are measuring write cycles but reporting it as lifetime, that ignores everything else that can go wrong, especially over a longer period of time (physical materials and component fatigue comes to mind as one possibility). – user Jun 28 '13 at 08:15
  • 7
    SSDs are better specifically at random IO, not just any IO. Normal drives will be just as good for sequential access such as media. – JamesRyan Jun 28 '13 at 11:41
  • @MichaelKjörling Of course something else will fail long before 100 years. My point was about how slowly the write life wears away even with fairly heavy usage. – Loren Pechtel Jun 28 '13 at 19:29
  • 1
    @JamesRyan That assumes that access is indeed sequential at the physical disk block level. For relatively static content it likely will be, but it isn't a given: even files written (relatively speaking) all at once, or which have space reserved for them at the beginning, can be fragmented. – user Jun 29 '13 at 13:19
  • 3
    I'd like point out that between a heavily written drive and a paperweight there are read-only media like optical discs. I'd also agree with this answer's recommendation that most normal users don't need to worry about SSD write cycles. Unless you are doing something unusual or running a service that heavily uses the file system, the SSD will probably last more than long enough. – jw013 Jul 02 '13 at 21:35
  • 1
    what is a 'modern' ssd? not to buy one manufactured before year x? – n611x007 Jul 12 '13 at 13:55
  • BTW SSD is not only good for speed but also less noisy and much more energy efficient. That's also nice when watching movies. Not using any HDD at all is one easy step to build a passively cooled system. – rudimeier Sep 11 '16 at 00:12
31

Ok, so the goal is to get as much bang for the buck as possible: speed vs. the price of replacement hardware (assuming a single large hard disk and a medium-size SSD, which seems to be the norm). To simplify, you can weigh how much you notice the speed increase from moving a file to the SSD against the number of sectors written to move that file there.

  • Files which need to be read a lot and written to rarely (such as the OS and programs) would probably be the most obvious to move to the SSD.
  • Files which are written once and read many times at a fixed data rate where the HDD is fast enough (for example music and video) should probably stay on the HDD. They are usually not modified, but consider that they are written to a lot of sectors.
  • Small files which are modified a lot (such as some temporary files) are more complicated. For example, given a sector size of 512 bytes, you can overwrite a single-sector file about 2,000,000 times before "consuming" the same number of sector writes as writing a single 1 GiB file once. If the SSD takes care of wear leveling, these should be equivalent.
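
The sector arithmetic is easy to check directly; a quick sketch, assuming 512-byte sectors:

```shell
# Number of 512-byte sectors in a 1 GiB file, i.e. how many single-sector
# overwrites cost the same number of sector writes as one 1 GiB write
sectors=$(( 1024 * 1024 * 1024 / 512 ))
echo "$sectors"   # 2097152
```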

Of course, even the best calculations also use up the most precious resource of all, time. So in the long run you're probably best off keeping it simple and buying new hardware slightly more often than the absolutely ideal case.

l0b0
  • 51,350
  • 2
    speed vs price of replacement vs. data loss. yeah not everyone uses backup even if they should. +1 – n611x007 Jul 12 '13 at 13:56
  • 1
    I have to admit that I like the concept of sector-writes as a measure of storage usage, particularly in the case of SSDs. :) – user Aug 11 '13 at 19:59
2

Agreeing with the others: you should put pretty much everything on the SSD, except maybe very large (video) files, to avoid wasting expensive SSD space.

However, you should also make sure TRIM is enabled:

  • Your SSD supports TRIM
  • Your partition is aligned on a multiple of the erase block size (EBS)
  • Your file system supports TRIM (ext4 usually does)
  • You run fstrim regularly (for example from a weekly cron job)
  • You keep at least 25% free disk space[1]
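
A rough sketch of how one might check the first point and act on the fourth from a shell (the device name is an example; adjust it to your system):

```shell
# Non-zero DISC-GRAN and DISC-MAX columns mean the device supports TRIM
lsblk --discard /dev/sda

# Trim all mounted file systems that support it (needs root)
sudo fstrim --all --verbose
```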

Remember to back up your data.

Wernight
  • 131
  • 1
    Are there any sources about the 25% free disk space? – thiagowfx Jan 05 '15 at 15:13
  • I've added a reference. This is similar to memory and hash maps, because it's garbage collected. Below that, the GC overhead rapidly becomes an issue. – Wernight Jan 06 '15 at 10:00
  • For posterity, I would like to add that the section you referenced was removed from the ArchWiki in this revision, with the following comment: "it will take some effort to buy an SSD without TRIM or overprovisioning: http://www.kingston.com/us/ssd/overprovisioning". – Spooky Feb 14 '19 at 00:25
  • 1
    In reality you either put swap on SSD or you configure no swap. There really isn't any situation where you want to swap on HDD is SSD is an alternative. – Mikko Rantalainen Mar 11 '19 at 11:49
  • @Wernight What is EBS? – Mmmh mmh Dec 01 '20 at 08:21
2

Besides all the answers here, there is a little tip I like. I have started using a ramdisk again alongside my SSD to slow the wear a little. I use it for the browser cache (actually the whole browser profile), various temp files, some unessential logs, etc. (via symlinks).

My ramdisk is set in fstab as follows:

tmpfs       /mnt/ramdisk tmpfs   nodev,nosuid,size=512M   0 0

The more RAM you have, the larger a ramdisk you can use efficiently. To go with this I have a boot/shutdown script. I've had mixed results writing the ramdisk backup to an encrypted device/folder, even with the restore run at lowest priority on boot and the backup at highest on shutdown.

This speeds up the system a little and saves some write cycles. A cron job doing an rsync every 15 minutes might be a good addition.

#!/bin/bash

### BEGIN INIT INFO
# Provides:          Ramdisk control
# Required-Start:    $local_fs
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 6
# Short-Description: Start/stop script at runlevel change.
# Description:       Ramdisk auto backup and restore
### END INIT INFO

PATH=/sbin:/bin:/usr/sbin:/usr/bin
USER="user1"
RDISK=/mnt/ramdisk
BACKUP=/opt/
#/home/$USER/BackUps/

#echo "$(date) $1" >> $BACKUP/rd.log

case "$1" in
    stop)
        # back up the ramdisk contents to disk before shutdown
        rsync -aE --delete $RDISK $BACKUP
        ;;
    start|force-reload|restart|reload)
        #restore ramdisk
        cp -rp $BACKUP/ramdisk/* $RDISK 2> /dev/null
        ;;
    *)
        echo 'Usage: /etc/init.d/ramdisk {start|reload|restart|force-reload|stop|status}'
        echo '       stop                       - backup ramdisk data'
        echo '       start|*                    - restore ramdisk data from backup'
        echo '       - default backup location is /xxxxx'
        exit 1
        ;;
esac


exit $?
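
The 15-minute rsync mentioned above could be a cron entry along these lines (a sketch; the file name is assumed, and the paths match the ramdisk and backup locations used in the script):

```shell
# /etc/cron.d/ramdisk-backup: sync the ramdisk contents to disk every 15 minutes
*/15 * * * *  root  rsync -aE --delete /mnt/ramdisk /opt/
```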

A little warning for Ubuntu users: don't use the /media/user/ folder for ramdisk backups, as it gets reset by some updates; I was losing profile data periodically because of this. Also, with Ubuntu I had some difficulties making ramdisk backups on an encrypted home folder.

tomasb
  • 121
0

If you don't want to spend time dispatching your data between the HDD and the SSD, you can use your SSD as a cache.
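
One common way to do this on Linux is bcache; a rough sketch, assuming /dev/sdb1 is an empty HDD partition and /dev/sda2 an empty SSD partition (these commands destroy existing data on them, so treat this purely as an illustration):

```shell
# Format the HDD partition as the backing device and the SSD as the cache
sudo make-bcache -B /dev/sdb1
sudo make-bcache -C /dev/sda2

# Attach the cache set to the backing device
# (<cache-set-uuid> is a placeholder for the UUID printed by the -C step)
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach

# Then create a file system on the combined device
sudo mkfs.ext4 /dev/bcache0
```

lvmcache is an alternative if you already use LVM.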

Emmanuel
  • 4,187
-2

I am sorry, but these are bad answers. Of course you can and should build a very fast system and still move the most heavily written folders to the HDD.

Move /tmp to tmpfs, or create a /tmp partition on the HDD. Also move /var/log, /var/spool and /var/tmp to the HDD and create symlinks at the original locations (don't put /var/tmp on tmpfs, as it holds data that should be accessible across reboots).

Move to the HDD and symlink ~/Downloads, ~/Videos, ~/Music, ~/.config, ~/.cache, ~/.thunderbird, ~/.mozilla, ~/.googleearth, ~/.ACEStream and any others that you know or find out to hold frequently written caches (always find where your specific browser keeps its cache and move it to the HDD; Chrome and Firefox are covered by the list above, I believe, but check for yourself). If you need to edit a video file you can move it to the SSD; otherwise 99% of documents and media gain no benefit from being on the SSD.

Since the HDD is then far less used by the system, these tricks have a negligible impact on performance and make a huge difference in the durability of the SSD. Move to the HDD and symlink your cloud folders (e.g. Dropbox) as well. Consider also moving /var/www if you're running Apache. Now you have a very fast system with almost no speed difference and much, much less wear.
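
A minimal demonstration of the move-and-symlink trick, using throwaway directories so nothing real is touched (substitute your actual home directory and HDD mount point):

```shell
# Stand-ins for the SSD (home) and the HDD mount point
ssd=$(mktemp -d)
hdd=$(mktemp -d)
mkdir "$ssd/.cache"
echo data > "$ssd/.cache/file"

# Move the write-heavy directory to the HDD...
mv "$ssd/.cache" "$hdd/cache"
# ...and leave a symlink at the original location
ln -s "$hdd/cache" "$ssd/.cache"

cat "$ssd/.cache/file"   # still readable through the old path
```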