As I mentioned in my prior post (updates to that post below), I had to re-install the Hyper-V Server OS. That did indeed go smoothly and it has worked since then. The main problem I have with the OS this time is very intermittent errors when connecting to it with Hyper-V Manager. Server Manager works, WAC works (including for VMs), but Hyper-V Manager, the majority of the time, does not. This is annoying.
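I haven't pinned down the cause yet, but for my own notes, the usual checklist for getting Hyper-V Manager talking to a workgroup (non-domain) Hyper-V Server host looks something like the sketch below. This is hedged: the hostname `r520` is a placeholder, and not every setup needs the CredSSP step.

```powershell
# On the Hyper-V Server host: make sure remote management is enabled
Enable-PSRemoting -Force

# On the client machine running Hyper-V Manager:
# trust the host for WinRM (needed in a workgroup, non-domain setup)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "r520" -Force

# Optionally allow delegating credentials to the host via CredSSP
Enable-WSManCredSSP -Role Client -DelegateComputer "r520" -Force
```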
Besides that, a few other things have changed:
- I now have 8x10TB drives in the server
- I upgraded to 128 gigs of RAM (I’m so glad 2013 Mac users are slow to upgrade…)
- I have installed an upgraded CPU
- If you’re curious: Intel Xeon E5-2420 V2 SR1AJ 6-core 2.2GHz 15MB LGA 1356 processor, “renewed”, $40 US shipped
The 128 gigs of RAM was nice, but the CPU is what I think I really needed and what will really help.

The Xeon the R520 came with is, I think, the lowest-end Xeon this particular version of the R520 offered: a non-hyper-threaded quad-core E5 v1. I upgraded to an E5 v2 6-core with hyper-threading.

What I learned right before upgrading to the 6-core is that I can assign all four cores to as many VMs as I want, limited only by the acceptable performance of the VMs. I also realized I could have assigned those Win 10 installs 24 gigs of RAM and all four cores during installation, then scaled them back to 2 cores and 8 gigs of RAM afterward, if I had wanted. A sketch of that follows below.
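For future me, that scale-back is a couple of Hyper-V cmdlets. A minimal sketch, assuming a VM named `Win10-01` (the name is made up), and noting the VM has to be off to change the processor count:

```powershell
# Give the VM all four cores and plenty of RAM for the install
Set-VMProcessor -VMName "Win10-01" -Count 4
Set-VM -VMName "Win10-01" -MemoryStartupBytes 24GB

# ...then scale it back down after installation
Set-VMProcessor -VMName "Win10-01" -Count 2
Set-VM -VMName "Win10-01" -MemoryStartupBytes 8GB
```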
Speaking of which… I did the exact thing I didn’t want to do: I created one big RAID 0 striped array with a simple (in other words, non-thin-provisioned) volume. That’s good because I have lots of storage (~70 TB), but bad because if one drive dies the entire array is gone (as is my understanding). And this is a Storage Spaces soft-RAID setup, if that wasn’t clear.
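For reference, creating that kind of pool and fixed “Simple” (striped, no redundancy) virtual disk from PowerShell looks roughly like this. A sketch only; the pool and disk names are placeholders:

```powershell
# Grab the eight 10TB drives that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# One big storage pool across all of them
New-StoragePool -FriendlyName "BigPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# A fixed (non-thin) Simple virtual disk: striped, zero redundancy
New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "BigDisk" `
    -ResiliencySettingName Simple -ProvisioningType Fixed -UseMaximumSize
```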
Thanks to the simple volume, installing a Windows 10 VM went from ~60 minutes down to ~14 minutes, and that was before the CPU and RAM upgrades. Although I’m not sure I’ll be able to beat that record, as at some point the mechanical SATA drives become the bottleneck.
I was actually thinking about creating 4x RAID 1 mirrored virtual disks and then striping them together; I think that’s RAID 10. That would effectively cut the storage space in half, but at least I would be a little more resilient against drive failures. I have no idea whether this would affect performance.
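If I go that route, my understanding is that Storage Spaces expresses this as a mirror striped across multiple columns rather than mirrors you stripe together by hand. A hedged sketch, reusing the placeholder pool name from above:

```powershell
# Two data copies striped across four columns (eight disks total),
# roughly the RAID 10 layout described above
New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "MirrorDisk" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 4 `
    -ProvisioningType Fixed -UseMaximumSize
```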
The last piece of the hardware half is the RAID controller in this R520: the Dell PERC H310 Mini. The answer to whether it’s possible to flash this card to “IT mode” seems ambiguous at best. This Mini version is the “special” version of the card that attaches directly to the motherboard; it’s not a PCI card in the traditional form factor, which is why so much of the web says attempting to flash it results in bricking the card. There is one document I found, a PDF only accessible via the Wayback Machine, that reports a successful flash. After all the money I’ve put into this server, I’m not sure I actually want to risk bricking my RAID card. Nor am I sure the server will even POST if it finds an effectively bricked PERC card.

I don’t think I’ve mentioned IT mode before: it’s something of a custom firmware that takes the BIOS of a RAID card, usually an LSI chipset, and overwrites it so the card passes the SATA drives directly through to the OS.

With traditional RAID cards there is a secondary BIOS, visible during bootup after the POST screen, with its own setup utility for configuring RAID on the connected hard drives. Some NAS-oriented operating systems, FreeNAS and its forks/derivatives being the most obvious examples, have their own ways of setting up and creating stripes and mirrors of disks. By putting a RAID card into a “pass-through” mode, the OS can talk to the drives directly, with superior performance. There are also supposed to be some otherwise-locked settings that can be adjusted in IT mode (I want to say queue settings, but I may not be remembering that right).

Anyway, since I’m not planning on using the onboard PERC setup utility to create disk arrays any time soon, I would really like to put the card into IT mode. But as I mentioned, the advice on doing so ranges from “you’ll brick the card / the server won’t boot at all” to “you can, via this series of obscure Linux commands, and good luck.” I think I’m just going to enjoy my server for a while instead. Maybe eventually I’ll be brave enough to try it.
So the R520 is now, hardware-wise, fully set up and operational. And I have something of a grip on the software side as far as networking and figuring out how to set up and configure virtual machines. There are really only two things holding me back: still not wanting the whole array to die if one drive goes, regardless of backup solution… and some kind of backup solution.

I’m actually working on a second PC for that backup solution. I haven’t figured out whether I want it to be a second VM server, a sort of duplicate of the R520, or whether I just want to use FreeNAS or similar to simply store a copy of the data.
I can always use a third box for a simple data copy, because having a second actual VM server would be kind of instructive for seeing if I could do a “live migration” between physical hosts.
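From what I’ve read, between two standalone hosts that boils down to enabling migration on both ends and then a `Move-VM`. A sketch under those assumptions; the VM and host names are placeholders, and I haven’t tried this yet:

```powershell
# On both hosts: allow live migration, using CredSSP authentication
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP

# From the source host: move a running VM, storage and all
Move-VM -Name "Win10-01" -DestinationHost "r520-2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\Win10-01"
```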
I don’t actually have enough spare hard drives for a full backup if I came even close to filling that 70 TB of capacity, of course. It would be nice to at least have some kind of copy of the data, though.
Direct updates to the prior post
I wanted to mention somewhere a few things I was off on in the previous post.

First, the ATA/AHCI settings I was looking at were actually for the SATA ports directly on the motherboard, which are independent of those on the PERC RAID card. In other words, the SATA port that normally connects to the optical drive, or in my case the SSD I’m using as a boot device.
Also, I created some dummy VMs, took the drives offline, and used a DBAN ISO to erase all the disks. Well, just up to 1 or 2%, actually. The 10 TB drives would take a while to fully erase with DBAN, and I just wanted them erased enough that I could “start over”.
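For that kind of quick “start over” (not a secure wipe), PowerShell’s `Clear-Disk` would have done it without booting DBAN at all. A hedged sketch; the size filter for picking out the 10 TB data drives is my assumption, so double-check it can’t catch the boot SSD:

```powershell
# DANGER: destroys partition tables and data on every matched disk.
# The size filter is meant to select only the 10TB data drives.
Get-Disk | Where-Object Size -gt 9TB |
    Clear-Disk -RemoveData -Confirm:$false
```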
References and Resources
- My most recent prior post on the status of the server: Lets try this again
- An explanation of Hyper-V networking thanks to altaro.com
- Hyper-V Server 2019 post about setting up a storage pool
- Hyper-V Server 2019 post about first setup, networking, RDP, etc.
- Day 31 of my 100 days of code has some information (day 32 has some info too)
- I also wrote a post about the R520 in particular
- General information on deploying storage spaces (MS Docs)
- PowerShell reference for disk-related tasks (MS Docs)