Why not Raid 0?

I have a Dell R610 that I tried to put some SSDs in, but the RAID controller in it didn't work well with them.

Instead I decided to just use two 15k RPM SAS disks and put them in RAID 0 to get both some performance and some capacity.

The data on the RAID 0 is just temporary data; it only lives there because it is a bit faster to work with than on the 5400 RPM disk it came from. The worst that happens if the RAID 0 loses everything is that the server has to copy the data over again from the slow disk, and then it carries on as before.

Still, I am told RAID 0 is stupid to use and that I should use SSDs instead. I can't get an answer as to why I shouldn't use RAID 0 when I don't really care about the data on it. I could understand it if it were important data, but this isn't.

So does anyone here have a reason why it is bad to use RAID 0 to get some more performance and capacity out of two disks, when the data on it can be lost without any issue?
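
Just to put rough numbers on the tradeoff as I see it, here is a quick sketch; the 600 GB capacity and 2% annual failure rate are made-up example figures, not specs for my actual drives:

```python
# Back-of-envelope numbers for a two-disk RAID 0.
# The capacity and per-disk annual failure rate below are illustrative
# guesses, not specs for any particular drive.
disk_capacity_gb = 600   # example: a 600 GB 15k SAS disk
per_disk_afr = 0.02      # assumed 2% annual failure rate per disk

# RAID 0 across two disks: capacity (and roughly the sequential
# throughput) doubles.
array_capacity_gb = 2 * disk_capacity_gb

# The array is lost if *either* disk fails, so the yearly failure
# probability is 1 - (1 - p)^2, i.e. roughly double the single-disk rate.
array_afr = 1 - (1 - per_disk_afr) ** 2

print(f"array capacity: {array_capacity_gb} GB")
print(f"chance of losing the array within a year: {array_afr:.1%}")
```

Roughly double the speed and capacity for roughly double the chance of losing data I can trivially recreate, which seems fine to me.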

There's absolutely no reason why you can't use them. Most people recommend going down the SSD route since, if speed is the primary concern, an SSD is generally going to be more than enough.

I think the reason people don't want to recommend RAID 0 is the increased risk of data loss should a drive fail in the array, and maybe they don't want anything coming back on them. But as you mention, the data isn't important, so as long as the sysadmin is aware of the risk, who cares? Use RAID 0 if you see fit.

In fact, at work I manage 15 servers, each with a RAID 0 array. Why? The only data on the arrays is temporary cache data, where speed matters more than retaining anything. We have separate systems that hold the important stuff.

Hell, I actually use a ramdisk for SQL dumps of small databases before compressing and shipping them for archival. That's technically worse than RAID 0, but it doesn't matter; it's all about the solution you want to roll out based on the importance of the data.
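
Roughly what that flow looks like, as a sketch (it assumes a Linux box where /dev/shm is a tmpfs mount and a PostgreSQL database; the database name and archive path are made-up examples):

```python
# Sketch of the dump-to-ramdisk-then-archive flow described above.
# Assumes /dev/shm is tmpfs and pg_dump is available; names and paths
# are hypothetical.
import gzip
import shutil
import subprocess
from pathlib import Path

DB_NAME = "smalldb"                            # hypothetical database
RAM_DUMP = Path("/dev/shm/smalldb.sql")        # lives in RAM, gone on reboot
ARCHIVE = Path("/mnt/archive/smalldb.sql.gz")  # hypothetical archive share

# Dump straight into the ramdisk; if the box dies mid-dump, just rerun it.
subprocess.run(["pg_dump", "-f", str(RAM_DUMP), DB_NAME], check=True)

# Compress within RAM, then ship the result off to durable storage.
compressed = Path(str(RAM_DUMP) + ".gz")
with RAM_DUMP.open("rb") as src, gzip.open(compressed, "wb") as dst:
    shutil.copyfileobj(src, dst)
shutil.move(str(compressed), str(ARCHIVE))
RAM_DUMP.unlink()  # free the RAM again
```

If the ramdisk evaporates, nothing of value is lost; the source database is still there and the job simply runs again.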

As long as you understand the limitations and risks, there are certainly use cases for RAID0.

I used to use ramdisks for temp storage; that worked well. I'm currently setting up a RAID 0 NVMe array for testing with windoze, which should be fun. I'm a believer in RAID 0 for certain things, RAID 1 or 10 for others, and really anything other than RAID 5 for everything else. RAID 5 IMHO is worse than RAID 0, as it gives a false sense of security.

My plan was to use RAID 5 for storage with 4 drives. It should allow one disk to fail, doesn't it?

Also, I didn't mention this before: this is just my home server. Nothing I run on it is mission critical in any way, and absolutely nothing on it is used in any professional way.

Everything on it can be lost without it being the end of the world; it would just be annoying.

RAID 5 allows a single drive failure, yes; however, the issues arise when resilvering the array without any failsafe.

If everything can be lost without any real loss other than time, then in your stated use case it's an acceptable risk.

Way back in the day I was bitten by RAID 5 and vowed never again. I run Z2 or Z3 arrays now and no longer sweat it, other than the cost at times, lol.

What happened? RAID 5 should have been stable enough in most circumstances.

Umm, that should be an obvious one: with RAID 5, if you lose another drive while resilvering the array, you're done for and lose everything.
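
One common way to put rough numbers on the rebuild risk is the unrecoverable read error (URE) angle, since on a lot of controllers a single URE mid-rebuild is enough to kill the whole thing. This sketch assumes the often-quoted consumer figure of one URE per 10^14 bits read; enterprise drives are usually rated much better, so treat it as an illustration rather than a prediction:

```python
# Rough illustration of RAID 5 rebuild risk via unrecoverable read errors.
# 1 error per 1e14 bits read is a common consumer spec-sheet figure;
# enterprise drives are often rated 1e15 or better, so this is a
# pessimistic example, not a prediction for any specific setup.
ure_per_bit = 1e-14     # assumed probability of a URE per bit read
disk_size_tb = 4        # example disk size
surviving_disks = 3     # a 4-disk RAID 5 rebuild reads the 3 survivors end to end

bits_read = surviving_disks * disk_size_tb * 1e12 * 8

# Probability the rebuild reads everything without hitting a single URE.
p_clean_rebuild = (1 - ure_per_bit) ** bits_read

print(f"bits read during rebuild: {bits_read:.2e}")
print(f"chance of a rebuild with no read errors: {p_clean_rebuild:.1%}")
```

With those example numbers you get well under a 50% chance of a clean rebuild, and that is before you even consider a second drive physically dying under the rebuild load.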

Well, yeah, I figured it was something other than the rebuild stress taking out another drive and hence the whole array. I thought maybe it was some other, unknown reason.

Torn on what to do…

I have ordered an H310 flashed to IT mode, but I'm not sure if I should try it or not… The problem is the way Unraid handles the disks: each file is placed entirely on one disk, so when reading it back, only that one disk is read from. If that disk is also the one Unraid decides to write a new file to, it has to read and write at the same time, which gives me very bad disk speeds.

Having the RAID controller handle all the drives means I hardly notice if I write a file while also reading.

Using the H310 would let me use an SSD for cache, but comparing a VM running from the SSD to a VM running on the two 15k SAS disks, I don't really see any difference inside the VM.

I also tried FreeNAS, as it seems to have other options for building a disk array, but Unraid has so many apps that are installed with just a few clicks and work right away. It is easy to map various network shares, and storing files in memory is super easy too. In FreeNAS I found that, for example, Plex is treated more like a VM, so it isn't as easy for it to access system resources. On Unraid I just point Plex to /tmp and it transcodes into /tmp/Transcode, which is in RAM. To do that in FreeNAS, the guides all said to create a RAM drive of the needed size, which involved messing around with fstab, and the same again when mapping different shares to it.
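
For what it's worth, here's a quick sanity check I could run before pointing Plex at the RAM-backed directory. It assumes a Linux host like Unraid where the transcode path sits on a dedicated tmpfs mount (it won't apply as-is to FreeNAS/FreeBSD, and some systems back /tmp differently); the paths and size threshold are just examples:

```python
# Quick sanity check before pointing Plex at a RAM-backed transcode dir.
# Assumes a Linux host and that the directory lives on a tmpfs mount;
# paths and the free-space threshold are examples only.
import os

TRANSCODE_DIR = "/tmp/Transcode"   # expected to sit under a tmpfs mount
MIN_FREE_GB = 8                    # rough headroom for transcoding

os.makedirs(TRANSCODE_DIR, exist_ok=True)

# Check whether the directory is under a tmpfs (RAM-backed) mount point.
with open("/proc/mounts") as mounts:
    tmpfs_mounts = [line.split()[1] for line in mounts
                    if line.split()[2] == "tmpfs"]
on_tmpfs = any(TRANSCODE_DIR.startswith(mnt) for mnt in tmpfs_mounts)

# Check how much space is actually available behind it.
stats = os.statvfs(TRANSCODE_DIR)
free_gb = stats.f_bavail * stats.f_frsize / 1024**3

print(f"on tmpfs: {on_tmpfs}, free: {free_gb:.1f} GiB")
if not on_tmpfs or free_gb < MIN_FREE_GB:
    print("warning: transcode dir is not RAM-backed or is low on space")
```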

You’ve seen the differences.

FreeNAS is built for speed and security at the expense of ease of use. Unraid is made to be easy to use and not much else, plus it costs money.

These days, when software RAID done right is just as fast as hardware RAID, there's no reason to use hardware RAID; if the card dies, you're screwed.