There is definitely a problem with the serial port passthrough. I found several bugs reported on the KVM mailing list, but no solution yet. As mentioned in another thread, USB passthrough does not work with all devices. I would not recommend Proxmox for serious hardware passthrough requirements.
balloon: 2048
bootdisk: virtio0
cores: 2
ide2: local:iso/debian-8.8.0-amd64-netinst.iso,media=cdrom
keyboard: sv
memory: 6144
name: OMV
net0: virtio=9A:8F:3C:80:9A:4B,bridge=vmbr0
numa: 0
onboot: 1
ostype: other
scsi2: /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2HRAYS5,size=K
scsi3: /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2AAV0V5,size=K
scsihw: virtio-scsi-pci
smbios1: uuid=d5aabc1c-c6d5-4033-b47ad23
sockets: 1
usb0: host=1-10,usb3=yes
usb1: host=2-4,usb3=yes
virtio0: local:100/vm-100-disk-1.qcow2,size=31G

-iscsi initiator-name=iqn.1993-08.org.debian:01:82e57f89a6bb
-device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5
-drive file=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2HRAYS5,if=none,id=drive-scsi2,format=raw,cache=none,aio=native,detect-zeroes=on
-device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2
-drive file=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2AAV0V5,if=none,id=drive-scsi3,format=raw,cache=none,aio=native,detect-zeroes=on
-device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=3,drive=drive-scsi3,id=scsi3

Hi, thanks for the reply. My config works for passthrough; the disks show up as /dev/sda and /dev/sdb in the VM, it's just that SMART doesn't work.
The disks have the name QEMU HARDDISK instead of the real name WDC WD40EFRX-68N32N0 (see below). I guess that LUN passthrough will enable SMART access to the disks from the VM. The LUN passthrough is possible with the added parameter device='lun' (see ovirt.org/develop/release-management/features/storage/virtio-scsi/), I just don't know if or how I can add that to my config.

About PCI passthrough, is it feasible to move the controller, as I only want two of my six SATA disks to be passed on to this VM? lspci shows me this; it must be the 00:17.0 SATA controller.
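For what it's worth, one thing that might be worth experimenting with (just a sketch, untested here, and the controller/drive IDs below are made up) is attaching the disks with QEMU's scsi-block device instead of scsi-hd. scsi-block passes SCSI commands through to the physical device, which I believe is roughly what oVirt's device='lun' does, and it is what smartctl in the guest needs. In a Proxmox conf that could go on an args: line, with the normal scsi2:/scsi3: entries removed so the disks aren't attached twice:

args: -device virtio-scsi-pci,id=extrascsi0 -drive file=/dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K2HRAYS5,if=none,id=drive-lun2,format=raw,cache=none,aio=native,detect-zeroes=on -device scsi-block,bus=extrascsi0.0,channel=0,scsi-id=0,lun=2,drive=drive-lun2

Inside the VM, smartctl may still need -d sat to talk to a SATA disk behind the SCSI layer.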
How did you know this was being passed through to a FreeBSD-based guest? ;) I haven't gotten a chance to play with FreeNAS yet, but it comes pretty well recommended, and hey, who doesn't like a good GUI, especially with dark mode.

I have 8 drives on the HBA which are all being passed through to the guest, so I can pass through either all 8 drives (plus an additional drive connected to the onboard SATA3 connector) or simply pass through the entire PCI HBA card (plus, again, the single drive connected to the onboard SATA port). The only thing is that with PCI passthrough, I have been reading it requires IOMMU to be enabled, which may be a bit more than I am willing to get into currently, especially if it requires more work up front with little to no performance benefit or otherwise.

The only advantage I could think of in this case is that PCI passthrough would make things ever so slightly easier when a drive fails in the future: one could simply replace the drive, and the guest should immediately pick it up and be ready to rebuild the array, whereas I would have to remove the previous disk and replace it with the ID of the new disk on the host for it to be seen by the VM. Not much of a concern for me, as this is a homelab and support is easy, but possibly the benefit would be there for supporting something more remote.
I'm also passing through disks to FreeNAS (3x3TB + 1 SSD cache), so I think I can speak to this. The thing FreeNAS doesn't like about sharing with Proxmox is ZFS: the rule of thumb is that each implementation of ZFS needs direct hardware access to the pool members. That's the only real motivator for passing through disks to FreeNAS.

I've been through both scenarios, having passed both the controller and the disks to FreeNAS, and there is no real performance advantage to either. FreeNAS fully supports the LSI2008 HBA, but it's just as happy with access to block devices.

IOMMU is a must for me, because I pass through a dedicated PCIe dual-gigabit NIC to pfSense, but passing through PCIe devices isn't that big a deal, just a few configs and a reboot (roughly the sketch below). The only time it gets weird is with video cards, and the weirdness is in the VM, not the host.

As for making the drives hot-swappable to replace failed pool members (I think you're describing this?), you won't be able to do this with SATA drives, but you could potentially do it with SAS drives. In that scenario, yes, you would want to pass through the entire controller, because you need to issue NCQ commands to bring disks up and down, which requires access to the controller.
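For reference, the "few configs" on an Intel Proxmox host are roughly these (the standard recipe from the Proxmox PCI passthrough docs; adjust for AMD or a different bootloader):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# apply, reboot, then check that IOMMU came up
update-grub
reboot
dmesg | grep -e DMAR -e IOMMU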
> I'm also passing through disks to FreeNAS

Yeah, I have been reading up on ZFS in general, and this is part of the reason I flashed the card to IT mode, so ZFS can get direct access to the disks rather than any other type of RAID getting in the way. I thought about implementing ZFS from within Proxmox and creating shares that way, but again, I haven't gotten a chance to play with FreeNAS and I'm a sucker for a GUI.

IOMMU sounds like something I might end up needing to enable at some point, then. I do eventually plan to install a GPU and pass it through to a Windows VM to do some testing for gaming, and then possibly check my options for using it as a remote render server as well.

> I passthrough a dedicated pcie dual gigabit NIC to pfsense

Since we are on the topic, I too want to implement a pfSense VM, but was unsure if it was best to pass my quad Pro/1000 VT through to that guest or not.
I am setting up an LACP bond to my main switch. I was recommended to create the bond on the host and then bridge an interface on top of it to be set as the default for my VMs, roughly like the sketch below.
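A minimal /etc/network/interfaces sketch of that layout on a Proxmox host (interface names and addresses are placeholders for whatever your NICs and subnet actually are):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

VMs then just get a virtio NIC attached to vmbr0 and never see the bond directly.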
Is there any specific reason you pass the entire NIC through to pfSense? To enable hardware offloading? If that is the case, do your other guests route through pfSense, and if so, how is that configured? Very interested to hear the different options out there.

> As for making the drives hot-swappable to replace failed pool members (I think you're describing this?), you won't be able to do this with SATA drives, but you could potentially do it with SAS drives. In that scenario, yes, you would want to pass through the entire controller, because you need to issue NCQ commands to bring disks up and down, which requires access to the controller.

Sorry if that last point was a bit confusing.
While the LSI2008 card does support SAS drives, I don't have any, or really a need for them. I didn't mean hot-swappable drives so much as that it may be easier to shut down the host, swap a drive, and, with PCI passthrough of the HBA, have the guest automatically detect the replaced drive and be ready to rebuild the array in FreeNAS. Whereas I would imagine that if I pass the drives through by device ID, should a drive fail I would then need to edit the config on the host to change the drive ID from the failed drive to the new one, and only THEN would the guest be able to start and see the new drive, no? A little more work in that case (nothing really crazy though); I just meant it might make more sense if the drives had to be replaced by, say, someone less tech savvy.

> Is there any specific reason you passthrough the entire NIC to pfSense?

Giving pfSense access to the NIC allows it to offload a lot of the heavy lifting of slinging packets from the CPU to the NIC, whose hardware is set up to do exactly that.
I did some tests when I was setting it up, and my CPU usage on full-bore 325 MBps speed tests went from 15% to next to nothing. As I mentioned elsewhere in these comments, it's not necessary to pass through the whole PCIe device; you can take advantage of the hardware acceleration by just specifying the individual functions, hostpciX: bus:device.function;bus:device.function (etc.).

Another thing you'll want to do if you virtualize pfSense is set the CPU type to 'host', which gives pfSense access not only to a couple of cores/threads but also to the CPU's extensions. The important bit there is AES-NI, which helps quite a bit with VPN acceleration. Without it, the virtual CPU does that heavy lifting in software, so load on the system is reduced if you do this.
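On the host that boils down to something like this (the VM ID and PCI addresses here just mirror the pfSense conf further down):

# give the pfSense VM the host CPU type
qm set 102 --cpu host

# pass the two NIC functions through individually (semicolon-separated)
qm set 102 --hostpci0 '08:00.0;08:00.1'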
The host CPU type is not an IOMMU-dependent feature; you can use it in any KVM. And yep, your scenario of passing through the HBA would allow the VM to auto-detect a new device; I misunderstood your comment initially. I think you're on a good path. I was in your shoes at one point, and let me tell you, since I've virtualized all the things, I would never go back. PCI passthrough isn't very scary and it works really well.

I'm planning a very similar setup. I currently have my FreeNAS virtualized, but I'm not 100% sure if I have my disks passed correctly.
It seems to work fine, though FreeNAS recognizes the disks as what look like virtual disks. In the VM's conf file I did the 'qm set' command for each drive by ID (roughly like the sketch after this post; I can get the exact config later). My potential issue arises when I think about how I'm going to virtualize my pfSense instance: I have a dedicated 4-port gigabit NIC I plan to pass through, but I have an E3-1285 v4 processor, which seems not to support ACS for IOMMU separation (per ). I'm not sure if this will be a problem or not, and I haven't had the time to mess with setting up a pfSense VM. Dual L5640 Xeons, 104GB RAM.

root@pve:# cat /etc/pve/qemu-server/100.conf
boot: dc
bootdisk: ide1
cores: 4
hostpci0: 09:00.0
ide1: local-zfs:vm-100-disk-1,size=16G
memory: 16384
name: FreeNAS
numa: 0
ostype: other
sata0: /dev/disk/
sata1: /dev/disk/
sata2: /dev/disk/
scsihw: virtio-scsi-pci
smbios1: uuid=2b89d643-32b5-4a29-8efb-32857819a545
sockets: 2
vmgenid: 01aa5be2-a033-4381-8f6d-9e6c803b9247

edit: I should emphasize that identifying disks by UUID usually leads to more intuitive identification when a disk fails and you have to pull it from ZFS in the guest.

Edit 2: did you want the pfSense conf?
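(The per-drive 'qm set' commands were along these lines; the MODEL_SERIAL parts are placeholders, you'd use whatever ls -l /dev/disk/by-id/ shows for your drives:)

qm set 100 --sata0 /dev/disk/by-id/ata-MODEL_SERIAL0
qm set 100 --sata1 /dev/disk/by-id/ata-MODEL_SERIAL1
qm set 100 --sata2 /dev/disk/by-id/ata-MODEL_SERIAL2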
root@pve:# cat /etc/pve/qemu-server/102.conf
bootdisk: ide0
cores: 2
cpu: host
hostpci0: 08:00.0;08:00.1
ide0: local-zfs:vm-102-disk-1,size=8G
ide2: none,media=cdrom
memory: 4096
name: pfsense2.3.4p1
numa: 0
onboot: 1
ostype: other
parent: upgrade245
scsihw: virtio-scsi-pci
smbios1: uuid=bbf51b51-b93f-43d3-8692-4ea8e178d44e
sockets: 2
tablet: 0

You can see I just pass the two PCI devices 08:00.0 and 08:00.1 individually to the VM; I don't need to give it the entire PCIe device. This is a characteristic of an Intel e1000-based NIC: the hardware extensions are available to each interface, which is nice.