I started writing this in July but am publishing it in September. I realize not everything below is accurate, but I was just learning about Proxmox at the time.
After a mostly unsuccessful attempt at using my Dell R520 with Hyper-V Server 2019 as a hypervisor, I've officially run out of patience with both the OS and myself. So I've decided to swap hypervisors to Proxmox, which seems to have far more container templates, documentation, and videos available than Hyper-V anyway.
To install it, I simply removed the SSD that had Hyper-V on it and put in a fresh 2.5″ SSD I had forgotten I had purchased (it was still in the package). I installed the OS from a thumb drive, which was incredibly straightforward.
As an aside…
Actually, I took an embarrassingly long time with some drywall anchors to set up a mount for an old 24-inch TV, just so the server would have a screen in its new home in a different room. After all that time mounting the TV, the OS installed in under 15 minutes and greeted me with a message to open a browser to a specific URL. So it will likely be a very long time before I even need that screen again, but I'm glad I mounted one there in case it's needed.
Back on subject, I wasn't sure what to go with for my storage pool. But seeing ZFS as an option, and being at least casually familiar with ZFS concepts from around 2010 or so, I chose ZFS. There is (or was, anyway) a "one gig of RAM for every 10 gigs of storage" rule of thumb with ZFS. If that rule still holds, I do actually have enough RAM to use my entire collection of 8×10 gig hard drives. So I just went with a regular old RAID-Z, which sounds a lot like the equivalent of RAID 5, i.e. the array can withstand one drive failure and keep going, in what is probably a "thick" provisioned array. Which is really all I wanted.
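To make that sizing concrete, here's a quick back-of-envelope sketch using the drive numbers above and the rule of thumb as I remember it (treat both as illustrative, not authoritative). The key point is that RAID-Z1, like RAID 5, spends roughly one drive's worth of capacity on parity:

```python
# Back-of-envelope pool math for the array described above.
# Drive count/size and the "1 GB RAM per 10 GB storage" rule are
# taken from the post itself; don't treat them as ZFS gospel.
DRIVES = 8
DRIVE_SIZE_GB = 10  # per the post

total_gb = DRIVES * DRIVE_SIZE_GB            # raw capacity of the pool
usable_gb = (DRIVES - 1) * DRIVE_SIZE_GB     # RAID-Z1: ~one drive's worth goes to parity
ram_needed_gb = total_gb / 10                # the rule of thumb as stated above

print(f"raw: {total_gb} GB, usable: {usable_gb} GB, RAM suggested: {ram_needed_gb} GB")
```

So by that math the pool survives any single drive failure at the cost of one drive's capacity, and the RAM on hand comfortably covers the rule of thumb.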
I've only just started experimenting with Proxmox and virtual machines, but it is quite easy to figure out how to make one. The thing I really expected to be more complex was creating the equivalent of the "virtual switch" I set up in Hyper-V (Proxmox uses Linux bridges for this). I used the web UI instead of the command line, and it didn't seem that bad. All I really wanted was one of the R520's NICs dedicated to managing the host machine and the other dedicated to the virtual machines. Fundamentals aside, I think I succeeded: one port is for web UI access and the other is for the VMs. I'm not sure this has really accomplished anything other than segmenting the host from the VMs, though, since it's all one network anyway.
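For reference, the split described above roughly corresponds to an `/etc/network/interfaces` layout like the following (Proxmox uses standard Debian-style network config under the hood). The NIC names (`eno1`/`eno2`) and addresses here are placeholders, not my actual config:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

# Management bridge: carries the host's IP, so the web UI lives here
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# VM-only bridge: no host IP, guests attach their virtual NICs here
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
```

The design choice is just that `vmbr1` carries no host address, so VM traffic goes out the second NIC without touching the management interface.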
References and Resources
- My most recent prior post on status of server: Lets try this again
- An explanation of Hyper-V networking thanks to altaro.com
- Hyper-V Server 2019 post about setting up storage pool
- Hyper-V server 2019 post about first-setup, networking, RDP etc
- Day 31 of my 100 days of code has some information (day 32 has some info too)
- I also wrote a post about the R520 in particular
- General information on deploying storage spaces (MS Docs)
- PowerShell reference for disk-related tasks (MS Docs)