The Hyper-V Amigos ride again! In this episode (19) we discuss some testing we are doing to create high-performance backup targets with Storage Spaces in Windows Server 2019. We are experimenting with stand-alone Mirror-Accelerated Parity, with SSDs in the performance tier and HDDs in the capacity tier, on a backup target. We compare backups sent via the Veeam data mover to this repository directly as well as via an SMB 3 file share, and we look at throughput, latency, and CPU consumption.
One of the questions we have is whether an offload card such as a SolarFlare NIC would benefit backups, since these cards offload more than just RDMA-capable workloads. The aim is to find out how much we can throw at a single 2U backup repository that must combine both speed and capacity, and we discuss the reasons why we are doing so. For me, it is because rack units come at a premium price in various locations. Spending money on repository building blocks that offer performance and capacity in fewer rack units therefore ensures we spend the money where it benefits us. If the number of rack units (likely) and power (less likely) are less of a concern, the economics are different.
My server is running Hyper-V Server 2012, and I am using the built-in Hyper-V Manager console on Windows 8.1 to control it. The virtual client hosted on the 2012 server is also Windows 8.1. In this case, I can double-click the VM in the Hyper-V console and get any resolution I want, including 1920x1080, without using RDP. When I try to enable physical GPUs in Hyper-V, however, the option is not there in the GUI. In Windows Server 2016 the option appears as in the first screenshot below; in Windows Server 2019, as the second screenshot shows, there is no option to enable it. So here is how to enable it in Windows Server 2019 — you can still use the RemoteFX vGPU feature on Windows Server 2019.

A SCSI card's BIOS doesn't do anything good or bad for a running Hyper-V host, but it does slow down physical boot times. Splitting your drive bays into separate arrays is a Microsoft SQL Server 2000 design; this is 2019 and you're building a Hyper-V server, so use all the bays in one big array.

Memory settings for Hyper-V performance: there isn't much that you can do.
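Although the GUI checkbox is gone in Windows Server 2019, the Hyper-V PowerShell module still exposes the RemoteFX cmdlets. A minimal sketch of enabling the feature from an elevated PowerShell session — the adapter wildcard and the VM name "Win81-Client" are illustrative assumptions, not names from the original post:

```powershell
# List the GPUs the host considers RemoteFX-capable
Get-VMRemoteFXPhysicalVideoAdapter

# Mark the adapter you want as available to RemoteFX
# (adjust the name filter to match your GPU)
Get-VMRemoteFXPhysicalVideoAdapter -Name "NVIDIA*" |
    Enable-VMRemoteFXPhysicalVideoAdapter

# Attach a RemoteFX 3D video adapter to the (powered-off) VM
Add-VMRemoteFx3dVideoAdapter -VMName "Win81-Client"
```

The VM must be off when the RemoteFX 3D adapter is added, and the host GPU must meet the RemoteFX requirements (DirectX 11-capable with a WDDM driver) for the adapter to appear as usable.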
Applies To: Windows Server 2019, Hyper-V Server 2019, Windows Server 2016, Hyper-V Server 2016, Windows Server 2012 R2, Hyper-V Server 2012 R2, Windows 10, Windows 8.1
The following feature distribution map indicates the features in each version. The known issues and workarounds for each distribution are listed after the table.
Table legend
- Built in - BIS (FreeBSD Integration Services) is included as part of that FreeBSD release.
- ✔ - Feature available
- (blank) - Feature not available
Feature | Windows Server version | FreeBSD 12-12.1 | 11.1-11.3 | 11.0 | 10.3 | 10.2 | 10.0-10.1 | 9.1-9.3, 8.4 |
---|---|---|---|---|---|---|---|---|
Availability | | Built in | Built in | Built in | Built in | Built in | Built in | Ports |
Core | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Windows Server 2016 Accurate Time | 2019, 2016 | ✔ | ✔ | |||||
Networking | ||||||||
Jumbo frames | 2019, 2016, 2012 R2 | ✔ Note 3 | ✔ Note 3 | ✔ Note 3 | ✔ Note 3 | ✔ Note 3 | ✔ Note 3 | ✔ Note 3 |
VLAN tagging and trunking | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Live migration | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Static IP Injection | 2019, 2016, 2012 R2 | ✔ Note 4 | ✔ Note 4 | ✔ Note 4 | ✔ Note 4 | ✔ Note 4 | ✔ Note 4 | ✔ |
vRSS | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ||||
TCP Segmentation and Checksum Offloads | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ||
Large Receive Offload (LRO) | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | |||
SR-IOV | 2019, 2016 | ✔ | ✔ | ✔ | ||||
Storage | | Note 1 | Note 1 | Note 1 | Note 1 | Note 1 | Note 1, 2 | Note 1, 2 |
VHDX resize | 2019, 2016, 2012 R2 | ✔ Note 6 | ✔ Note 6 | ✔ Note 6 | ||||
Virtual Fibre Channel | 2019, 2016, 2012 R2 | |||||||
Live virtual machine backup | 2019, 2016, 2012 R2 | ✔ | ✔ | |||||
TRIM support | 2019, 2016, 2012 R2 | ✔ | ✔ | |||||
SCSI WWN | 2019, 2016, 2012 R2 | |||||||
Memory | ||||||||
PAE Kernel Support | 2019, 2016, 2012 R2 | |||||||
Configuration of MMIO gap | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Dynamic Memory - Hot-Add | 2019, 2016, 2012 R2 | |||||||
Dynamic Memory - Ballooning | 2019, 2016, 2012 R2 | |||||||
Runtime Memory Resize | 2019, 2016 | |||||||
Video | ||||||||
Hyper-V specific video device | 2019, 2016, 2012 R2 | |||||||
Miscellaneous | ||||||||
Key/value pair | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ Note 5 | ✔ |
Non-Maskable Interrupt | 2019, 2016, 2012 R2 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
File copy from host to guest | 2019, 2016, 2012 R2 | |||||||
lsvmbus command | 2019, 2016, 2012 R2 | |||||||
Hyper-V Sockets | 2019, 2016 | |||||||
PCI Passthrough/DDA | 2019, 2016 | ✔ | ✔ | |||||
Generation 2 virtual machines | ||||||||
Boot using UEFI | 2019, 2016, 2012 R2 | ✔ | ✔ | |||||
Secure boot | 2019, 2016 | | | | | | | |
Notes
1. It is suggested to label disk devices to avoid a ROOT MOUNT ERROR during startup.
2. The virtual DVD drive may not be recognized when BIS drivers are loaded on FreeBSD 8.x and 9.x unless you enable the legacy ATA driver through the following command.
3. 9126 bytes is the maximum supported MTU size.
4. In a failover scenario, you cannot set a static IPv6 address on the replica server. Use an IPv4 address instead.
5. KVP is provided by ports on FreeBSD 10.0. See the FreeBSD 10.0 ports on FreeBSD.org for more information.
6. To make online VHDX resizing work properly in FreeBSD 11.0, a special manual step is required to work around a GEOM bug that is fixed in 11.0+: after the host resizes the VHDX disk, open the disk for write and run `gpart recover`.
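The exact command sequence for note 6 was elided in this copy. On FreeBSD it is typically something like the following — the device name `da1` and partition index are assumptions that depend on how the disk appears in the guest:

```shell
# Open the device for write so GEOM re-reads ("re-tastes") the new size;
# a zero-count dd is enough to trigger this without writing any data.
dd if=/dev/da1 of=/dev/da1 count=0

# Rewrite the backup GPT header at the new end of the disk
gpart recover da1

# Then grow the partition and the filesystem on it, for example:
gpart resize -i 1 da1
growfs /dev/da1p1
```

`gpart recover` is needed because GPT keeps a secondary header at the last sector of the disk, which is no longer at the end after the host grows the VHDX.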
Additional notes: The feature matrix for 10-STABLE and 11-STABLE is the same as for the FreeBSD 11.1 release. In addition, FreeBSD 10.2 and earlier versions (10.1, 10.0, 9.x, 8.x) are end of life. Refer to FreeBSD.org for an up-to-date list of supported releases and the latest security advisories.
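For note 3, jumbo frames are configured on the FreeBSD guest's Hyper-V synthetic NIC with `ifconfig`, staying at or below the 9126-byte maximum. The interface name `hn0` is the usual name for the Hyper-V network device but is an assumption here, as is the `rc.conf` line:

```shell
# One-off, effective until reboot:
ifconfig hn0 mtu 9126

# Persistent across reboots, as a line in /etc/rc.conf:
# ifconfig_hn0="DHCP mtu 9126"
```

The Hyper-V virtual switch and the physical NIC on the host must also be configured for jumbo frames, or large packets will be dropped on the way out of the guest.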