I’ve been working on my hypothetical server – a hand-me-down Dell R520, a 2U box, to be specific – for at least two months now. As it stands it’s capable of holding 8 hard drives and has 1 of a possible 2 Xeon processors. And all the ingredients are finally sort of starting to come together.
First I had to replace the two 15k RPM 36 gigabyte drives with SATA drives. Then I had to figure out how to set the BIOS/PERC card to non-RAID mode so the thing could actually see the SATA drives.
Then I bought six additional drive sleds so I could actually fill the thing up with drives.
Somewhere along the line I upgraded it from 4 gigs of RAM to 36 gigs. That was something of an adventure in itself.
Two things have actually happened in the past week:
- I wanted to utilize all 8 slots for hard drives, so I ended up ordering a CD-to-HDD adapter (the device is actually made for laptops, but it worked in the R520)
- I saw a really good price on 10TB hard drives – limit of 5 per order but…I guess good enough – and ordered 5x 10TB hard drives
Unfortunately these 10TB hard drives won’t arrive until after I’ve left for my trip. But it will give me something to look forward to when I get home.
The CD-to-HDD adapter itself turned out to be incredibly cheaply made, and inexplicably had these two really long…I guess you’d call them screws…sticking into the hard drive area of the device. I still have no idea why they were there. I had to take the thing apart to unscrew them so I could fit the drive in.
Then I couldn’t get the server to see the drive. At first I thought I had broken the adapter, but after some experimenting I realized the CD drive itself was disabled in the BIOS. Once I turned it on, the SSD was in fact recognized by the system. Except the OS wouldn’t boot anymore.
Since I was re-installing the OS anyway, I decided to enable UEFI mode and go full-steam-ahead and damn-the-spinrite. SpinRite doesn’t do GPT partitions, only MBR/BIOS mode, ya see. So I’ll never be able to run SpinRite against any of my drives again (hooray). Keep in mind the 10TB drives won’t give me any choice anyway: MBR tops out a bit over 2TB, so they have to be GPT.
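For anyone wondering where that 2TB-ish cutoff comes from, it’s just a back-of-the-envelope thing (nothing R520-specific): MBR records partition start and length as 32-bit sector counts, so with standard 512-byte sectors the arithmetic caps out just above 2TB.

```python
# MBR stores partition start/length as 32-bit sector counts.
SECTOR_SIZE = 512    # bytes, the standard logical sector size
MAX_SECTORS = 2**32  # largest value a 32-bit LBA field can hold

max_mbr_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_mbr_bytes)           # 2199023255552 bytes, i.e. ~2.2 decimal TB

drive_bytes = 10 * 10**12      # a "10TB" drive, decimal as marketed
print(drive_bytes > max_mbr_bytes)  # True -- MBR can't address it, GPT required
```

So a 10TB drive is nearly five times what an MBR partition table can even describe.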
Of course all this was just for the hardware. Haven’t even made it to the OS/software side yet…
I have to say I’m getting better at the – as I like to call it – free-as-in-beer edition of Windows Server, Hyper-V Server 2019. It’s like Linux, but undocumented and with a less-than-robust default CLI. Also, the menu system is inexplicably written in VBScript.
Anyway, I had a lot of difficulty setting up previous versions of Hyper-V Server. I mean generally: just RDP, C$ shares, and running Server Manager and Hyper-V Manager against it. It would take me an incredibly long time to get even those four things working, and they’re really the four most basic features needed to even start using the thing. But the 2019 version is much easier, and I’ve learned some tricks that make setup simpler.
I’ll probably do a write-up on the commands I used, but that post is for later. Right now I have to figure out how I want to configure the server for day-to-day use. Once I choose a configuration it will be difficult, if not impossible, to change to something else. So it’s probably good to choose one and stay with it.
I was experimenting for a while with the drives I have while I wait for the 10TB ones. Using Server Manager, I tried different stripes and configurations to create “virtual disks” for the storage of the eventual virtual machines.
I also have the option of using the SAS/PERC/whatever card to create the storage pool of drives. That way I wouldn’t need to worry about Windows configuration at all, since Windows would just see one big storage device. It wouldn’t help if the card went bad, but then I think I’d lose the array if that happened anyway. In theory I could recover the array if it were a software-based Windows thing – I mean, if I took the drives and put them in another PC also running Hyper-V 2019. Or I could just export the VMs as a regular thing.
As for what I’m actually going to do with this machine and its storage, I’m still working that out. Probably I’ll use it as a storage server; that seems pretty obvious. But do I want to store my data directly on an array created by the R520 motherboard, or do I want to store the data in a virtualized Server 2019 using virtual hard disk files?
The most obvious answer is to keep the files in virtual hard drive files in a Server 2019 virtual machine, since these VHDX files – so far as I know – can be mounted in Windows 10 as storage devices via Disk Management (or diskpart on the CLI). If I went that way I could easily make backup copies of the VHDX file, and copy data to other VHDX files both manually and in an automated way. And Hyper-V is supposed to have this “live migration” feature for moving virtual machines from one host to another without having to power off the VM. I don’t know why I would need that; it just sounds really great. More want than need, I think, is what I’m saying.
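The “automated way” could honestly be as dumb as a timestamped file copy, since a VHDX is just a file when the VM is off. A sketch, with made-up paths and a hypothetical helper name (in real life I’d want the VM shut down, or I’d export/checkpoint instead of copying a live disk):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_vhdx(vhdx_path: str, backup_dir: str) -> Path:
    """Copy a VHDX into backup_dir with a timestamp in the filename.

    Assumes the owning VM is powered off so the file isn't locked/changing.
    """
    src = Path(vhdx_path)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_dir) / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves timestamps/metadata
    return dest

# Hypothetical usage on the server:
# backup_vhdx(r"D:\VMs\storage.vhdx", r"E:\Backups")
```

Schedule something like that nightly and you get crude, restorable point-in-time copies without any backup software involved.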
So virtualization seems obvious. I just need to decide what kind of OS I want to use and how much RAM each VM gets. If I use Server 2019 as a domain controller then it would probably get most of the RAM. 36 gigs of RAM sounds like a lot, but I think it can get used up pretty quick.
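Just to sketch how fast 36 gigs could go – these VM names and sizes are purely hypothetical, not anything I’ve actually decided on:

```python
# Hypothetical RAM budget for the R520's 36 GB -- illustrative numbers only.
TOTAL_GB = 36
HOST_RESERVE_GB = 4  # leave something for the Hyper-V host itself

vms = {
    "dc01-server2019": 8,       # domain controller
    "storage-server2019": 16,   # the VM holding the data VHDX files
    "test-vm": 4,               # scratch box for experiments
}

allocated = sum(vms.values()) + HOST_RESERVE_GB
print(f"allocated: {allocated} GB of {TOTAL_GB} GB")  # allocated: 32 GB of 36 GB
print(f"headroom:  {TOTAL_GB - allocated} GB")        # headroom:  4 GB
```

Three modest VMs plus a host reserve and there’s only 4 GB of slack left – so yeah, 36 gigs goes quicker than it sounds.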