What do I need to think about when installing Linux on an SSD? You want to limit reads/writes so the SSD doesn't get worn out, right?
What filesystems are preferred? What filesystems are not?
What about swap?
Anything else that is good to think about?
I think I will use Debian or Mint.
asked 21 Sep '11, 11:34
The biggest issue I've found with SSDs is the much higher cost per gigabyte: for the same money you get a far smaller drive than a comparable IDE/PATA/SATA HDD.
The other (and even bigger issue for me) is that SSDs don't support all of the various RAID levels (like the one I use, RAID 0+1).
Another issue is that if your BIOS doesn't support SSDs, such as on an older system, that too can cause problems for you.
I've not had a ton of experience with the SSD HDDs, but I know that Linux has much better support for the older, more established formats (IDE/PATA/SATA) than it does for SSD. Over time, of course, this will improve and SSD HDD support will be better a year or two down the road from where it is now. I tend to go for stability, so I run 32-bit versions of Linux vs. 64-bit versions because the support for 32-bit is better now than it is for 64-bit. Again, this too will improve over time.
I prefer stability, so Ubuntu LTS over Ubuntu NON-LTS for me on desktops, Debian or CentOS on servers, 32-bit over 64-bit, and IDE/PATA/SATA over SSD when it comes to HDDs.
ALL of the above of course is my own personal opinion and experiences. I tend to be an idealist who prefers stability and shoots for "Best Practices" over what "just works, even if it isn't ideal or Best Practices".
Keep in mind that Linux Mint, Zorin OS, Kubuntu, Xubuntu, Lubuntu, and Ubuntu are all based on Debian. Ubuntu LTS initially pulls its code from debian-testing, and the NON-LTS versions of Ubuntu pull from debian-unstable (whereas Debian itself goes debian-unstable > debian-testing > debian-stable <-- which is where Debian releases come from) --- So as such, Ubuntu and its forks aren't going to be as stable as plain old Debian. Then again, maybe people LIKE and WANT bleeding edge, and that's cool if they do, but then just run Gentoo, Slackware, or Linux From Scratch and compile it yourself, or at least run Arch (but know that Arch uses unsigned packages - which I would never trust or use on a production system... only a system for testing / developing on.)
Lastly, file systems..... ext3 is the way to go. It's old, stable, and journaled.
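If you do go with ext3, a minimal sketch of creating the filesystem looks like this (the device name /dev/sdb1 is a placeholder - substitute your own partition):

```shell
# Hypothetical device name; replace /dev/sdb1 with your actual partition.
mkfs.ext3 -L rootfs /dev/sdb1

# Optional: lower the root-reserved block percentage from the default 5% to 1%,
# which frees up space on a small SSD.
tune2fs -m 1 /dev/sdb1
```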
As for partitioning, if this is just a home system for desktop use, just toss everything on one drive and one / partition. You MAY want to put /home on its own hard drive if you distro hop, because that way you can just unmount it, install, remount it, and continue on with it as is.
If this is for a server setup, SSDs haven't proved themselves enough to me yet to use them in that capacity, but to answer your question on partitions... it depends on what kind of server it is. Some write to the hard drive(s) more than others do.
I also recommend a centralized /var/log server where all other servers point to it instead of the local /var/log directory path on each local system.
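As a sketch of how that centralized logging can be set up with rsyslog (the hostname loghost.example.com and the port are placeholders, not anything from this thread):

```shell
# On each client, forward all log messages to the central log server.
# Drop this line into /etc/rsyslog.d/50-forward.conf:
#   @@ = TCP, a single @ = UDP; 514 is the conventional syslog port.
echo '*.* @@loghost.example.com:514' > /etc/rsyslog.d/50-forward.conf

# Reload rsyslog so the new rule takes effect.
service rsyslog restart
```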
I don't claim to be an expert, but I've had no problems on my little SSD during the past 10 months.
There are lots of opinions about SSDs and some disagreements. Some believe that the following items are important to think about:
- TRIM
- alignment
- disk activity
- unused disk space
- swap
An easy way to get TRIM going is to use the "discard" option on ext4 and kernel 2.6.33 or later. Ubuntu LTS default kernel is 2.6.32 and the forums do have some discussion and concern about using this kernel with SSDs.
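A minimal sketch of what that looks like in practice (the device name and mount point are placeholders):

```shell
# /etc/fstab entry enabling online TRIM via the "discard" mount option:
#   /dev/sda1  /  ext4  discard,errors=remount-ro  0  1
#
# Alternatively, trim in one batch from time to time (kernel 2.6.33+),
# which avoids the per-delete overhead of the discard option:
fstrim -v /
```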
Alignment parameters will vary depending upon the SSD model. Do some research and take your time to optimize performance.
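A common rule of thumb is to start partitions on a 1 MiB boundary, which satisfies the erase-block alignment of typical SSDs. A small sketch of checking this (the sector values are examples - read your own from `fdisk -l`):

```shell
# Check whether a partition's start sector is 1 MiB-aligned.
# Get the start sector from: fdisk -l /dev/sda   (2048 here is an example)
START_SECTOR=2048
SECTOR_SIZE=512   # bytes; check with: cat /sys/block/sda/queue/hw_sector_size

if [ $(( START_SECTOR * SECTOR_SIZE % 1048576 )) -eq 0 ]; then
    echo "aligned"      # 2048 * 512 = 1048576 bytes = exactly 1 MiB
else
    echo "misaligned"
fi
```

With the example values above this prints "aligned", since sector 2048 at 512-byte sectors is exactly the 1 MiB mark.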
Disk activity is a puzzle, or at least it was a puzzle for me the last time I tried to figure out what was happening. Popular Linux distros that use Gnome or KDE constantly write to the hard drive every few seconds. Here are two things that will decrease writes to your SSD.
I have /var partitions on my data drive, one for each distro that's installed on the SSD.
There is much discussion about journalizing and relatime and noatime. I lack the expertise to determine who is right, but there appears to be some validity to each opinion. You might go with journalizing and relatime unless you find a compelling reasons to abandon the advantages of these features.
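For reference, the two options differ only in the mount flags; a sketch of each as an /etc/fstab entry (device names are placeholders):

```shell
# relatime: update the access time only if it is older than the modify time
# (the default on recent kernels) -- a compromise between writes and correctness:
#   /dev/sda1  /  ext4  relatime,errors=remount-ro  0  1
#
# noatime: never update access times -- fewest writes, but can break software
# that relies on atime (some mail readers, for example):
#   /dev/sda1  /  ext4  noatime,errors=remount-ro   0  1
```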
There is a theory that unused disk space used for "wear leveling" will increase the life of an SSD materially. For example, if you have a 60GB drive, you would format maybe 50GB. Or you would format the entire drive but be sure that some significant portion remains unused.
If you can afford an SSD, you probably can afford some RAM. So, if you do have swap on your SSD, be sure that you have enough RAM such that swap is not constantly used. I put swap on a rotating data drive.
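If you keep swap at all, you can also make the kernel less eager to use it. A sketch (the value 10 is a common suggestion, not anything specific to this thread):

```shell
# Lower the kernel's tendency to swap (the default vm.swappiness is 60;
# lower values mean swap is used only under real memory pressure).
sysctl vm.swappiness=10

# Persist the setting across reboots.
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
```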
IMO, as SSD technology matures, things like TRIM, alignment, and unused space for wear leveling will be taken care of by the manufacturer/vendor. At this point, SSDs are for experts and for those who are willing to risk making mistakes.
I'm probably making mistakes that shorten the life of my SSD. But I store data on a rotating drive (and back it up constantly). The SSD is actually removable - used in a rack that fits inside a 3.5" bay, and I have another 2.5" boot drive (a rotating drive) that I throw in the rack and update occasionally in anticipation of a sudden problem with the SSD.
Actually, the second boot drive is a backup drive. It is used only for backing up files that reside on the data drive. A third boot drive (rotating drives are so cheap now!) is used occasionally for experimenting with different distros and doing all those things that an enthusiastic klutz will do.
The little SSD has been absolutely stable using PCLOS & Mint, and the responsive systems are fun to use.
answered 25 Sep '11, 13:29
Not an expert at all. Does not write code.
Seems to me like writing and reading isn't the issue as much as deleting and writing. There isn't a needle searching all over a surface, so it seems that retrieving data is not an issue. I guess they get more scrambled the more you rewrite?
It's been more than a year since these last posts. Where does SSD technology stand now?
Understand that this is the only thread I've read on the subject as it is the first suggestion from a search engine.
Partition schemes make the most sense to me as a user. Put the OS and program files in a partition on the SSD, and your data becomes easier to locate because the structure of the hardware matches the structure of the file system the drive is formatted to. At least, that's what I deduced from the logic I learned from reading the first 1/3 of the Arch Linux Handbook.
answered 30 Jan '13, 05:58
An SSD is very fast at the sacrifice of longevity. The "sectors" of an SSD can be written to only a limited number of times (I have read 1000 times is an approximate limit). The drive's firmware does its best to avoid writing the same areas all the time, thus extending the life of the drive. As the drive becomes full, the firmware has a much tougher time doing that.
Makes no difference. A write is a write. If you open a document and save it, then you have re-written the document to disk. Even if you open a document and don't make changes, the "file access date" attribute changes and is re-written (on certain file systems).
The files that you write are the smallest problem. Any time you start up or shut down, possibly hundreds of files are modified by the system and re-written to disk.
All drives and disks use partitions, whether it be floppy, hard drive, flash drive, SSD, CD, or DVD. The only disk without a partition is a truly blank disk. For Linux, the best practice is to use 3 or 4 partitions: boot, root, home, and swap, with boot being optional. Windows wants to install on a single partition. The Windows installer tries to save your old documents during an upgrade, unless you choose "custom install" in which case you had better have a good backup.
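The 3-to-4-partition layout described above can be sketched as /etc/fstab entries; every device name, size, and mount option below is illustrative, not taken from anyone's actual setup:

```shell
# Example layout: SSD for the system, rotating drive for swap.
#   /dev/sda1  /boot  ext2  noatime          0 2   # small, optional
#   /dev/sda2  /      ext4  noatime,discard  0 1   # root filesystem
#   /dev/sda3  /home  ext4  noatime,discard  0 2   # user data
#   /dev/sdb1  none   swap  sw               0 0   # swap on the rotating drive
```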
In all fairness, if you have an SSD that is nowhere near full, that drive could last a very long time. The firmware is designed to maximize the life of the drive and in most cases can do a very good job.
I have not purchased SSD drives. I can't give first hand advice and my opinions are based solely on many tech articles that I have read. What I can say is that I have several standard drives that are 10 years old and still going strong. They continue to pass all diagnostics. I use them every day.
Regardless of what you decide, there is no excuse for not keeping full, current backups. Any drive can fail at any time.
answered 01 Aug '13, 10:07