Multiple RAID configurations on Dell R710

I have a Dell R710, which has eight SFF HDD slots. When I originally purchased it (retired by its original owners), it came with no hard drives. I purchased two 1 TB SAS drives, which I set up in a RAID 1 configuration. I have installed XCP-ng on this machine and have several VMs running on it. I now wish to expand my storage, so I purchased a lot of ten 1 TB SAS drives (I wanted to have spares, especially considering these were “seller refurbished” drives).

I just received the drives and I’m currently in the process of installing the bare drives into the trays. I won’t have the ability to install the drives for another two days, due to work constraints, but I got to thinking and wondering whether I’ll be able to set it up the way I am planning. My intention is to keep the first two drives in their RAID 1 configuration, with XCP-ng and the VMs intact. I then plan on filling the remaining six tray slots with drives and setting them up as a separate RAID 10 configuration.

Most of my VMs do not require large amounts of storage, and 1 TB is easily sufficient for the hypervisor, the VMs, and the storage needs of all but one of my VMs. The one VM that will require large amounts of storage is the one running ZoneMinder, a video surveillance system. The actual program will be fine on that first 1 TB array, but I want these new drives to be used to store video files. I’ve debated between RAID 5 and RAID 10, and although RAID 5 is more efficient in storage capacity, I’m thinking RAID 10 will give me better read/write performance.
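(For the math: with six 1 TB drives, RAID 5 would give roughly (6 − 1) × 1 TB ≈ 5 TB usable and survive one drive failure, while RAID 10 gives 6 ÷ 2 × 1 TB ≈ 3 TB usable, tolerates one failure per mirror pair, and avoids the parity-write overhead that hurts RAID 5 write performance.)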

I just had a thought, though, that I am hoping is unfounded and that someone here could clear up: I’m worried that the RAID controller only supports a single RAID volume per controller and not two or more RAID configurations on the same controller.

It depends on the RAID card that you have, but the higher-end ones will support multiple RAID groups like you want to do.

Thanks Tom. I guess I have the PERC 6/i controller. I see I could upgrade it to an H700. If I were to upgrade, I’m guessing that my original RAID 1 configuration would be destroyed… is that true? Is there any way to retain the contents of the original RAID 1 array? It appears that the PERC 6/i has a slower interface than my drives, which are Seagate Constellation.2 1 TB 6 Gb/s SAS drives.

If upgrading the controller would destroy the first array, I’d need to rebuild my XCP-ng server. If I exported the VMs to an external storage device, would it be a simple import of the VMs from the external drive to restore them? Also, if I needed to start from scratch, would it be better to go with an 8-drive RAID 10 vs. a 2-drive RAID 1 plus a 6-drive RAID 10? I kind of like the idea of my surveillance storage being on a different array than the other VMs.

I don’t think you can copy the data from the old RAID controller to the new one, so I would recommend reloading. You can just back up the VMs and re-import them. Putting XCP-ng on a mirror and the other drives on RAID 10 makes sense to me.
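For anyone reading later, the back-up-and-re-import Tom mentions can be done with the xe CLI on the XCP-ng host (XCP-ng Center or Xen Orchestra can do the same from the GUI). This is just a rough sketch; the /mnt/external path and the placeholders are made up for illustration:

```
# Find the VM's UUID (or use its name-label directly)
xe vm-list

# Shut the VM down so the export is consistent
xe vm-shutdown uuid=<vm-uuid>

# Export to an external drive mounted on the host (placeholder path)
xe vm-export vm=<vm-uuid> filename=/mnt/external/myvm.xva

# Later, on the rebuilt host, import the .xva and start it
xe vm-import filename=/mnt/external/myvm.xva
xe vm-start uuid=<new-vm-uuid>
```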

I decided to order the H700 controller and was shocked by how low the price was at Server Monkey. I have never worked with them before, but they had the card with 512 MB of memory for $10, or as a kit including two SAS cables and a new battery for $25. I ended up calling Server Monkey when I didn’t see the kit initially and got a knowledgeable guy on the phone on the first ring. I was amazed at how easy it was. Provided shipping is timely and the products work as they are supposed to, I would give them a good recommendation.

I just watched a video (on YouTube) of someone upgrading their R710 controller from 6/i to an H700 and upon boot-up, the controller recognized a previous RAID configuration on the pre-existing drives. I’m keeping my fingers crossed that this will happen for me, although I will export all my VMs before doing all this.


I thought I would give a follow-up, in case anyone is curious or someone in the future is researching the same upgrade.

To be safe, I backed up all of my VMs. One particular VM took about 12 hours to export! I went to bed last night with it still exporting; this morning it was finished. I shut down my system and opened it up.

Physically, the job was relatively easy. Thankfully, Dell made it so it does not require any tools other than your fingers. I pulled out the fans (just two latches) and pulled out the cover over the RAM/CPU. I noticed there was a plate protecting the SAS cables and the battery cable running from the RAID controller to the front of the case; this unlatched with a simple press of a blue lever. It took a little effort to get the PERC 6/i card out, but eventually it came free. It took a little effort to align everything to get the new H700 card in, but at last that was done too.

Before putting in the card, I plugged in the two new SAS cables and the battery cable. I then routed the cables up and made sure SAS A was plugged into A and B into B… the cables were not labeled in my kit. I secured the new battery in the slot designed to hold it… I’m not sure if there is any difference between the old battery and the new one, but I installed the new one anyway. I then put everything else back the way it was supposed to go and turned on the machine.

Upon booting the machine, it immediately recognized that I had changed the RAID controller from a PERC 6/i to an H700. It also recognized my original RAID 1 configuration. I did not add any additional hard drives yet, but wanted to see if XCP-ng would boot; it did, just as if nothing had changed.

I then loaded the remaining six slots with 1 TB 6 Gb/s SAS drives. I rebooted the machine and entered the RAID controller setup. I was able to create a new virtual disk and select RAID 10 with three spans of two drives, which gives me ~2.7 TB of usable space. It took a while to initialize the disks, but I took off to renew my driver’s license at the DMV, which took three hours… not due to long lines but because the staff didn’t know what they were doing… it was all new to them; a frustration I go through every five years. Anyway, when I got back from the DMV, I was able to restart my R710. I wish it were easier to add the new virtual disk from within XCP-ng Center’s GUI, but I followed instructions I found online to add it as a new storage repository.
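In case it helps anyone doing the same thing, here is roughly what that step looks like from the command line on the XCP-ng host. The /dev/sdb device name and the SR label are assumptions; confirm which block device is the new RAID 10 virtual disk before running anything:

```
# Confirm which block device is the new RAID 10 virtual disk (assumed /dev/sdb here)
lsblk

# Get the host UUID
xe host-list

# Create a local LVM storage repository on the new virtual disk
xe sr-create host-uuid=<host-uuid> type=lvm content-type=user \
    name-label="RAID10-local-storage" device-config:device=/dev/sdb
```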

Once I added the storage to the main server, I tried to add a new storage drive to a VM. This is when I ran into an odd issue. I wanted to give the VM 2 TiB of storage, out of the ~2.7 available. I started walking the size down until it finally allowed me to allocate 1 TiB. I then started my VM and created an entry in /etc/fstab to mount the new drive. When I attempted to perform a “mount -a”, it complained that the drive was not formatted. I decided the easiest way to deal with that would be to boot the VM with a GParted ISO in the virtual DVD drive. I previously had two virtual hard drives on this VM, so this new virtual drive was “/dev/xvdc”. I tried to format the drive, but it said there was no partition table. I had to go into “Device” (in GParted) and create a new partition table. I was given many choices of partition table, defaulting to “msdos.” As I’m running Ubuntu Server 20.04 on this VM, I figured “msdos” was a bit antiquated and did a little research; I ended up choosing “gpt” as the partition table. After formatting the new primary partition as ext4, I was able to exit and reboot.
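The GParted live ISO worked fine, but the same partition-table/format/mount steps can also be done from inside the VM with standard tools. A quick sketch, assuming the new disk really is /dev/xvdc and using /mnt/video as a made-up mount point:

```
# Create a GPT partition table and one partition spanning the whole disk
sudo parted /dev/xvdc --script mklabel gpt mkpart primary ext4 0% 100%

# Format the new partition
sudo mkfs.ext4 /dev/xvdc1

# Mount by UUID so the fstab entry survives device renumbering
sudo blkid /dev/xvdc1
sudo mkdir -p /mnt/video
echo 'UUID=<uuid-from-blkid>  /mnt/video  ext4  defaults  0 2' | sudo tee -a /etc/fstab
sudo mount -a
```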

The VM I’m using this on is running ZoneMinder. I ended up leaving all three virtual drives intact: 30 GB for the system, 100 GB for video from before the addition of the new RAID 10 array, and now the new 1 TB drive. For each physical camera in my video system, I have a main, high-quality video feed and a lower-quality sub-stream that I use for motion detection. I call these sub-streams my “trigger” feeds. I am now storing my “trigger” videos on the 100 GB drive and the high-quality videos on the new, larger one.
