qemu-system-x86_64: invalid accelerator nvmm. Is this a bug ?
marietto2008 at gmail.com
Fri Dec 31 17:32:18 PST 2021
Yeah, I tried to join the IRC channel several times, but after some hours I
left the channel because no one gave any support. It seemed to me that the
channel was full of bots, not humans. Anyway, I posted the question on
different channels, such as the ML and Reddit, but in all these cases I didn't
get any support, to be honest. There is an update on this problem.
I got qemu from here : fetch
and I tried again:
-machine type=q35,accel=nvmm \
-smp cpus=4 -m 8G \
-device virtio-blk-pci,drive=disk0 \
-netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
-device virtio-net-pci,netdev=net0 \
-object rng-random,id=rng0,filename=/dev/urandom \
-device virtio-rng-pci,rng=rng0 \
-display curses \
but I got this output:
qemu-system-x86_64: NVMM: Mem Assist Failed [gpa=0xfffffff0]
qemu-system-x86_64: NVMM: Failed to execute a VCPU.
Abort trap (core dumped)
VGA Blank mode
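A side note on that failure (my reading, not from the thread): gpa=0xfffffff0 is the x86 reset vector, so the guest faulted on its very first instruction fetch, which fits the thread's diagnosis of running an upstream Qemu build instead of the patched DPorts one. A minimal sketch to tell the two apart; the /usr/local/bin heuristic and the pkg(8) hint are assumptions about a typical DragonFly layout:

```shell
# Sketch: check which qemu-system-x86_64 would run, and guess its origin.
QEMU_BIN=$(command -v qemu-system-x86_64 || echo "not found")
echo "qemu binary: $QEMU_BIN"
# A DPorts/pkg(8) install normally lands in /usr/local/bin; a hand-built
# upstream tree usually runs from its build directory or /usr/bin.
case "$QEMU_BIN" in
  /usr/local/bin/*) echo "looks like the pkg(8)/DPorts build" ;;
  *)                echo "possibly an upstream build; consider: pkg install qemu" ;;
esac
```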
On Sat, Jan 1, 2022 at 00:54 Aaron LI <aly at aaronly.me> wrote:
> Hi Mario,
> On Dec 31, 2021, at 23:51, Mario Marietto <marietto2008 at gmail.com> wrote:
> I certainly don't want to bother you, but after having posted this question
> in several places with no replies (IRC, the ML, Reddit, the UnitedBSD
> forum),
> I didn’t see this question on the #dragonflybsd IRC channel or this
> mailing list.
> I thought of trying this unorthodox method, just because... I need
> support to understand the reasons for this error. I also think that by
> helping me, you are also helping the other DFLY lovers. Thanks for your
> understanding. Taking into consideration that NVMM has been ported from
> NetBSD, and that on NetBSD this same error has been fixed, as you can see
> below:
> it seems that the fix has not been ported to DFLY, so for this reason we
> can't use NVMM.
> Sure. I noticed such a fix to Qemu upstream some time ago.
> But well, we can’t directly use the official unmodified version. The
> DPorts version should be used, and there are several necessary patches
> (ideally we’d push them upstream, but that takes time and effort).
> It looks to me like you were using the Qemu build from upstream rather than
> one installed via pkg(8). If so, that’s the issue: just switch to
> our version and it should work.
> So, this is what I did:
> I'm trying to test qemu and nvmm on:
> DragonFly marietto 6.1-DEVELOPMENT DragonFly
> v188.8.131.523.gfca8e8-DEVELOPMENT #0: Wed Dec 22 09:11:32 CET 2021
> marietto at marietto:/usr/obj/usr/src/sys/X86_64_GENERIC x86_64
> First of all, I added these users to the nvmm group:
> root at marietto:/home/marietto # pw groupmod nvmm -m marietto
> root at marietto:/home/marietto # pw groupmod nvmm -m root
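A minimal, hedged way to double-check the result of the pw(8) commands above (the group and user names are the ones from this thread; the /dev/nvmm permissions are an assumption worth verifying):

```shell
# Confirm the nvmm group now lists both users; awk on /etc/group is a
# portable way to print the member field (field 4 of the group entry).
awk -F: '$1 == "nvmm" { print "nvmm members: " $4 }' /etc/group
# The device node should also be group-accessible (assumption:
# crw-rw---- root:nvmm). Uncomment to check:
# ls -l /dev/nvmm
```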
> Then, I launched this VM:
> qemu-system-x86_64 \
> -machine type=q35,accel=nvmm \
> -smp cpus=4 -m 8G \
> -drive if=pflash,format=raw,readonly=on,file=/usr/local/share/uefi-edk2-qemu/QEMU_UEFI_CODE-x86_64.fd \
> -drive if=pflash,format=raw,file=/usr/local/share/uefi-edk2-qemu/QEMU_UEFI_VARS-x86_64.fd \
> -drive file=/mnt/dragonfly-ufs/bhyve/impish-cuda-11-4-nvidia-470.img,if=none,id=disk0 \
> -device virtio-blk-pci,drive=disk0 \
> -netdev user,id=net0,hostfwd=tcp:127.0.0.1:6022-:22 \
> -device virtio-net-pci,netdev=net0 \
> -object rng-random,id=rng0,filename=/dev/urandom \
> -device virtio-rng-pci,rng=rng0 \
> -display curses \
> WARNING: Image format was not specified for '/mnt/dragonfly-ufs/bhyve/impish-cuda-11-4-nvidia-470.img' and probing guessed raw. Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. Specify the 'raw' format explicitly to remove the restrictions.
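A side note on that WARNING (my suggestion, not from the thread): declaring the image format explicitly silences the probe and lifts the block-0 write restriction. Only the format=raw addition is new; the path and ids are taken from the command above. A config-fragment sketch:

```shell
-drive file=/mnt/dragonfly-ufs/bhyve/impish-cuda-11-4-nvidia-470.img,format=raw,if=none,id=disk0 \
```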
> Unfortunately, I get this error, which seems like a bug to me:
> qemu-system-x86_64: invalid accelerator nvmm
> because NVMM itself works great:
> root at marietto:/home/marietto/Desktop # nvmmctl identify
> nvmm: Kernel API version 3
> nvmm: State size 1008
> nvmm: Comm size 4096
> nvmm: Max machines 128
> nvmm: Max VCPUs per machine 128
> nvmm: Max RAM per machine 127T
> nvmm: Arch Mach conf 0
> nvmm: Arch VCPU conf 0x3<CPUID,TPR>
> nvmm: Guest FPU states 0x3<x87,SSE>
> The NVMM part looks good.