Why not RAID 0?

I have a Dell R610 that I tried to put some SSDs in, but the RAID controller in it didn't work that well with them.

Instead I decided to just use two 15k RPM SAS disks in RAID 0, to give me both some performance and some capacity.
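(For reference, the poster is using the hardware controller, but the equivalent software stripe on Linux is a short mdadm sketch like this — the device names, mount point, and filesystem here are assumptions, not details from the post:)

```shell
# Create a two-disk RAID 0 (striped) array with mdadm.
# /dev/sdb and /dev/sdc are assumed device names -- adjust for your system.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount it for scratch data.
mkfs.ext4 /dev/md0
mkdir -p /mnt/scratch
mount /dev/md0 /mnt/scratch

# Check the array status.
cat /proc/mdstat
```

These commands need root and real block devices, so treat them as a sketch rather than something to paste blindly.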

The data on the RAID 0 is just temporary data; it only lives there because it is a bit faster to work with than on the 5400 RPM disk it came from. The worst that happens if the RAID 0 loses all its data is that the server has to copy the data over again from the slow disk, and then it continues to work as before.

Still, I am told RAID 0 is stupid to use and that I should use an SSD. I can't get an answer to why I should not use RAID 0 when I don't really care about the data on it. I could understand it if it were important data, but this isn't.

So does anyone here have a reason why it is bad to use RAID 0 to get some more performance and capacity out of two disks, when the data on it can be lost without any issue?

There's absolutely no reason why you can't use them. Most people recommend going down the SSD route since, if speed is the primary concern, an SSD is generally going to be more than enough.

I think people don't want to recommend RAID 0 because of the increased risk of data loss should a drive fail in the array, and they maybe don't want anything coming back on them. But as you mention, the data isn't important, so as long as the system admin is aware of the risk, who cares? Use RAID 0 if you see fit.

In fact, I manage 15 servers at work, each with a RAID 0 array. Why? The only data on the array is temporary cache data, where speed is more important than retaining anything. We have separate systems that contain the important stuff.


Hell, I actually use a ramdisk for SQL data dumps of small databases before compressing and shipping them for archival. That's technically worse than RAID 0, but it doesn't matter; it's all about the solution you want to roll out based on the importance of the data.
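(A workflow like the one described boils down to a tmpfs mount plus a dump-compress-ship pipeline. This is a hedged sketch — the size, paths, database name, and archive host are all illustrative, and `mysqldump` is just one example of a dump tool:)

```shell
# Create a 2 GB RAM-backed scratch directory with tmpfs.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

# Dump a small database into RAM, compress, ship for archival,
# then tear the ramdisk down. All names below are placeholders.
mysqldump mydb > /mnt/ramdisk/mydb.sql
gzip /mnt/ramdisk/mydb.sql
scp /mnt/ramdisk/mydb.sql.gz archive-host:/backups/
umount /mnt/ramdisk
```

Everything on the tmpfs disappears on unmount or reboot, which is exactly the "worse than RAID 0" durability the post is talking about.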

As long as you understand the limitations and risks, there are certainly use cases for RAID0.

I used to use ramdisks for temp storage; they worked well. Currently setting up a RAID 0 NVMe array for testing with windoze, should be fun. I'm a believer in RAID 0 for certain things, RAID 1 or 10 for others, and really anything other than RAID 5 for everything else. RAID 5, IMHO, is worse than RAID 0 as it gives a false sense of security.

My plan was to use RAID 5 for storage with 4 drives. It should allow one disk to fail, doesn't it?

Also, I didn't mention this before: this is just my home server. Nothing I run on it is mission critical in any way, and absolutely nothing on it is used in any professional way.

Everything on it can be lost without it being the end of the world, it would just be annoying.

RAID 5 allows a single drive failure, yes; however, the issues arise when resilvering the array, because during the rebuild there is no failsafe left.

If everything can be lost without any real loss other than time, then in your stated use case it's an acceptable risk.

Way back in the day I was bitten by RAID 5 and vowed never again. I run Z2 or Z3 arrays now and no longer sweat it, other than the cost at times, lol.


What happened? RAID 5 should have been stable enough in most circumstances.

Umm, that should have been an obvious observation? With RAID 5, if you lose another drive while resilvering the array, you're done for and lose everything.


Well, yeah, I figured it was something other than the rebuild stress taking out another drive and hence the whole array. I thought maybe there was some other, unknown reason.

Torn on what to do…

I have ordered an H310 flashed to IT mode, but I'm not sure if I should try it or not… The problem is the way unRAID handles the disks: one entire file is put on one disk, so when reading it again, only one disk will be read from. If that disk is also the one unRAID decides to write a new file to, the disk has to both read and write at the same time, which gives me very bad disk speeds.

Having the RAID controller handle all the drives makes it so I hardly notice if I write a file while also reading.

Using the H310 would let me use an SSD for cache, but comparing a VM running from the SSD to a VM running on the two 15k SAS disks, I don't really see any difference inside the VM.

I also tried FreeNAS, as it seems to have other options for making a disk array, but unRAID has so many apps that install with just a few clicks and work right away. It is easy to map various network shares, and storing files in memory is super easy too. In FreeNAS I found that, e.g., Plex is treated more like a VM, so it isn't that easy for it to access system resources. On unRAID I just point Plex to /tmp and it transcodes into /tmp/Transcode, which is in RAM. To do that in FreeNAS, the guides all said to create a RAM drive at the size needed, which involved messing around with fstab, and the same when mapping different shares to it too.
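(The fstab approach those guides describe comes down to a single tmpfs entry; the size and mount point below are illustrative, not taken from any particular guide:)

```
# /etc/fstab -- RAM-backed directory for transcode scratch space.
# Size and mount point are assumptions; match them to your workload.
tmpfs  /mnt/transcode  tmpfs  rw,size=4g  0  0
```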

You’ve seen the differences.

FreeNAS is built for speed and security at the expense of ease of use. unRAID is made to be easy to use and not much else, plus it costs money.

These days, when software RAID is just as fast as hardware RAID when implemented correctly, there's no reason to use hardware RAID: if the card dies, you're screwed.


Nice, thank you so much…

If you do not care about the data or its integrity — swap space, temp files, etc. — by all means use RAID 0. RAID 10 could also help here in some small cases.

RAID 5 used to be the go-to, but that's changed, and a lot. RAID 6 is good, e.g. RAID-Z2.

I firmly believe in open source independence and hate being tied to a hardware RAID controller for future breakdowns (it will happen), leaving you up the creek without a paddle. People used to laugh at me back in 2009 when I said that software RAID was the answer: mdadm and such. You can also use the power of ZFS, like FreeNAS does.

Yes, FreeNAS is a PITA and has its quirks (and security problems) that I am not a huge fan of, but when understood and done right (even done manually — my approach), you cannot beat ZFS, not ever, not even a little bit. But if you choose to do that, stick to the BSD platform, not the poor Linux ports. With a few commands and a bash script or two, I can do most things ZFS via a simple terminal. You can too, if you spend a bit of time to RTFM. In my opinion, BSD is easier to manage than your standard Linux distro, and more secure. What do you think FreeNAS and pfSense are built on? :shockerface: BSD/UNIX
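("Most things ZFS via a simple terminal" really is a handful of commands. A minimal sketch — the pool name, disk device names, and dataset layout are all assumptions:)

```shell
# Create a RAID-Z2 pool from six disks (FreeBSD-style device names assumed).
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Create a dataset with lz4 compression enabled.
zfs create -o compression=lz4 tank/storage

# Periodically verify data integrity, then check the result.
zpool scrub tank
zpool status tank
```

The scrub is the part hardware RAID cards typically don't give you: ZFS checksums every block and can tell you exactly which files, if any, are damaged.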

Fast mechanical disks in RAID 0 are fast for long sequential transfers because you can write to and read from both disks at the same time. In this scenario they can be faster than a single SSD, depending on the SSD, because the pair has higher throughput at the bus level (two data connectors instead of one).

If the files are small, seek times become more important, and the SSD will win, because it has no moving mechanical parts.

RAID 0 is risky because the array fails if either drive fails, and the probability of that happening is higher than the failure probability of either drive on its own.
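(That claim is easy to make concrete: if each disk fails independently with probability p over some period, a stripe of n disks loses everything with probability 1 − (1 − p)^n, which is always at least p. A quick sketch — the 3% per-disk failure rate is an illustrative assumption, not a measured figure:)

```python
def raid0_failure_probability(p_disk: float, n_disks: int = 2) -> float:
    """Probability that a RAID 0 array loses data, assuming each of
    n_disks fails independently with probability p_disk."""
    return 1 - (1 - p_disk) ** n_disks

# With an illustrative 3% failure rate per disk:
p = 0.03
print(raid0_failure_probability(p))      # two-disk stripe: ~5.9%
print(raid0_failure_probability(p, 4))   # four-disk stripe: ~11.5%
```

So a two-disk stripe roughly doubles the chance of losing the data — which, as noted above, only matters if the data matters.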

But if your data is temporary and can be recreated, this is no big deal.

Depending on how much space you need, you can also use two SSDs in RAID 0.
You can also use ZFS instead of your RAID card. If set up correctly, it can provide additional speed because of compression.