After being very frustrated with the performance of the combination of virtual machines and my drive pool, I decided to re-install Hyper-V Server 2019.
Before doing so, however, I decided it would be a good idea to go through and record some settings in case I needed to go back to a prior configuration. It seems odd that there is no way to back up BIOS/other firmware settings for a server (I've seen this feature on consumer-level desktop PCs going back to 2002). Or perhaps there is a way I am unaware of.
In any case, I took pics with my phone of as many different screens as possible so I would have some kind of reference in case I needed to go back.
I then changed a number of settings, including switching from UEFI to BIOS mode. I still left the drive settings set to "AHCI" instead of ATA because I'm almost positive AHCI is the better choice on the SATA bus (it enables things like native command queuing and hot-swap that legacy ATA mode doesn't). Or whatever. I should perhaps mention the other motivation for switching to BIOS mode: I haven't yet figured out how to make a custom bootable thumb drive that boots in UEFI mode. So in some sense I may not have had a choice.
I got all my settings the way I wanted them, then took out the SSD I was using for the OS and inserted a different SSD for the fresh install. This was so I could switch back to the other SSD in case I missed a setting or changed something somewhere. Although I found where Windows stores the command history for PowerShell, so this will probably not be necessary.
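For reference, on recent Windows builds that command history is kept by the PSReadLine module, and PowerShell can be asked directly where the file lives. A minimal sketch, assuming PSReadLine is loaded (it is by default on PowerShell 5.1 and later); the backup path is illustrative:

```powershell
# Ask PSReadLine where it saves the persistent command history
(Get-PSReadLineOption).HistorySavePath
# Typically resolves to something like:
#   %APPDATA%\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt

# Copy it somewhere safe before swapping the OS drive (destination is made up)
Copy-Item (Get-PSReadLineOption).HistorySavePath D:\backup\ps_history.txt
```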
Since I have been experimenting with integrating drivers with Windows install sources/ISOs – something I should do a separate post on – I happen to already have an ISO with the R520 drivers integrated into it.
I still had a couple of issues, however: first, I had some trouble finding a thumb drive that would be recognized by the R520 and actually boot. Then I had to make sure the drive was in the proper USB port and properly set to be bootable. Finally I got Hyper-V 2019 Setup to boot, and what's the first thing that happens? It can't find the hard drives.
As it turns out, I had integrated the drivers into install.wim but not into boot.wim. The boot.wim image holds the setup preinstallation environment (Windows PE), where things like drives first have to be detected, separate from the later stages of the installation.
So I used the same method as before to integrate all the drivers into boot.wim and tried again. There were still some minor glitches (setup being confused by my unattend.xml file, for instance), but eventually I managed to install the OS. I haven't made it very far into setting the OS up post-install, but I have already noticed that hardware that would not otherwise have been there is present and installed.
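The integration step itself can be done with DISM. A minimal sketch of what I mean, assuming the ISO contents are extracted to D:\hv2019 and the R520 drivers sit under C:\drivers\R520 (both paths are illustrative):

```powershell
# Mount the setup (PE) image inside boot.wim; index 2 is Windows Setup itself
Dism /Mount-Image /ImageFile:D:\hv2019\sources\boot.wim /Index:2 /MountDir:C:\mount

# Inject every driver package found under the folder, recursively
Dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\R520 /Recurse

# Commit the changes back into the .wim
Dism /Unmount-Image /MountDir:C:\mount /Commit
```

The same three commands, pointed at install.wim (with the appropriate /Index for the edition), cover the post-PE stage of setup.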
So the good news is I now have a thumb drive dedicated entirely to installing Hyper-V 2019 with the R520 drivers already present. It’s too early to tell if there will be further good news after this point.
On a more miscellaneous note: before giving up on the prior OS install, I had dismantled the 8-drive array and created a 3-drive parity array. Then I tried a quick, just-through-Hyper-V-Manager attempt to make a virtual machine, just to see what would happen. I don't know how that new setup would have worked, because the VM offered only SCSI as the hard drive controller type, and when I tried to install an OS (Win 10) the setup couldn't find the virtual disk. Which seems like an odd error I still have no idea how to fix. I couldn't add an IDE controller, as it was not an option. This is probably because I chose the "Generation 2" option in the VM creation wizard; Generation 2 VMs only support SCSI controllers.
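If the Generation 2 choice really is the culprit, the VM can be recreated as Generation 1 (which does include IDE controllers) from PowerShell. A sketch with the built-in Hyper-V cmdlets; the VM name, memory size, and paths are all made up for illustration:

```powershell
# Create a Generation 1 VM, which boots from an IDE-attached virtual disk
New-VM -Name "Win10Test" -Generation 1 -MemoryStartupBytes 4GB `
    -NewVHDPath "D:\VMs\Win10Test.vhdx" -NewVHDSizeBytes 60GB

# Point the VM's DVD drive at the install media, then power it on
Set-VMDvdDrive -VMName "Win10Test" -Path "D:\iso\Win10.iso"
Start-VM -Name "Win10Test"
```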
Frustrated with that result, I then went to delete this VM, but got another error when trying to do so: the VM was not fully powered off and therefore could not be deleted.
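For the stuck not-fully-powered-off case, forcing the machine off and then removing it from PowerShell is usually enough. A sketch (the VM name is illustrative):

```powershell
# -TurnOff pulls the virtual power plug instead of requesting a guest shutdown
Stop-VM -Name "Win10Test" -TurnOff -Force

# Remove the VM's configuration; its .vhdx files are left behind on disk
Remove-VM -Name "Win10Test" -Force
```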
I only mention this little anecdote because the fresh install of Hyper-V still seems to see the 3-drive parity array I had created, even though I had reset everything else and changed (I think?) the drive types, or at least how the drives should be detected by the OS. That actually makes some sense: Storage Spaces writes its pool metadata onto the member disks themselves, so any Windows install that can see the disks can see the pool.
So what I'm starting to think of doing is disconnecting the OS drive, setting the drives to ATA mode for compatibility, and DBAN'ning the whole bunch of them. Or I could do this through the host and VMs that access the individual drives. Not sure which would be better, frankly. The DBAN operation, I'm fairly certain, would take quite a lot of time.
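If the goal is just to destroy the leftover pool metadata rather than to securely erase, wiping from the host is much faster than DBAN. A sketch using the built-in storage cmdlets; the disk numbers are illustrative, and note that Clear-Disk destroys everything on the disk:

```powershell
# Tear down the leftover virtual disks and the pool itself, if still present
Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false
Get-StoragePool -IsPrimordial $false | Remove-StoragePool -Confirm:$false

# Then wipe the partition tables on the (now pool-free) data disks
Get-Disk | Where-Object Number -In 1,2,3 |
    Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
```

DBAN would still be the right tool if the point is a multi-pass overwrite as a stress test, as mentioned below.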
These are new drives. I could use this as something of a stress test on them.
References and Resources
- An explanation of Hyper-V networking thanks to altaro.com
- Hyper-V Server 2019 post about setting up storage pool
- Hyper-V Server 2019 post about first-setup, networking, RDP etc
- Day 31 of my 100 days of code has some information (day 32 has some info too)
- I also wrote a post about the R520 in particular
- General information on deploying storage spaces (MS Docs)
- PowerShell reference for disk-related tasks (MS Docs)