From olekstysh at gmail.com Thu Apr 14 20:03:51 2022
From: olekstysh at gmail.com (Oleksandr Tyshchenko)
Date: Thu, 14 Apr 2022 20:03:51 -0000
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To: <1d6382b6-ddf8-494c-4f7b-afc50a4269a4@gmail.com>
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org> <1d6382b6-ddf8-494c-4f7b-afc50a4269a4@gmail.com>
Message-ID: 

Hello all.

[Sorry for the possible format issues]

I have an update regarding a (valid) concern that has also been raised in
this thread: the virtio backend's ability (when using Xen foreign mapping)
to map any guest pages without the guest's "agreement".
There is a PoC (with virtio-mmio on Arm) which is based on Juergen Gross'
work to reuse secure Xen grant mapping for the virtio communications.
All details are at:
https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekstysh at gmail.com/
https://lore.kernel.org/xen-devel/1649964960-24864-1-git-send-email-olekstysh at gmail.com/

-- 
Regards,

Oleksandr Tyshchenko

From alex.bennee at linaro.org Fri Apr 15 09:23:20 2022
From: alex.bennee at linaro.org (Alex Bennée)
Date: Fri, 15 Apr 2022 09:23:20 -0000
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To: 
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org> <1d6382b6-ddf8-494c-4f7b-afc50a4269a4@gmail.com>
Message-ID: <87pmlisnst.fsf@linaro.org>

Oleksandr Tyshchenko writes:

> Hello all.
>
> [Sorry for the possible format issues]
>
> I have an update regarding a (valid) concern that has also been raised in
> this thread: the virtio backend's ability (when using Xen foreign mapping)
> to map any guest pages without the guest's "agreement".
> There is a PoC (with virtio-mmio on Arm) which is based on Juergen Gross'
work to reuse secure Xen grant mapping for the virtio
> communications.
> All details are at:
> https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekstysh at gmail.com/
> https://lore.kernel.org/xen-devel/1649964960-24864-1-git-send-email-olekstysh at gmail.com/

Thanks for that. I shall try and find some time to have a look at it.

Did you see Viresh's post about getting our rust-vmm vhost-user backends
working on Xen?

One thing that came up during that work was how guest pages are mapped
into the dom0 domain, where Xen needs to use kernel allocated pages via
privcmd rather than the normal shared mmap that is used on KVM. As I
understand it, this is to avoid the situation where dom0 may invalidate a
user PTE, causing issues for the hypervisor itself. At some point we
would like to fix that wrinkle so we can remove the (minor) hack in
rust-vmm's mmap code to be truly hypervisor agnostic.

Anyway I hope you and your team are safe and well.

-- 
Alex Bennée

From olekstysh at gmail.com Fri Apr 15 11:06:48 2022
From: olekstysh at gmail.com (Oleksandr)
Date: Fri, 15 Apr 2022 11:06:48 -0000
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To: <87pmlisnst.fsf@linaro.org>
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org> <1d6382b6-ddf8-494c-4f7b-afc50a4269a4@gmail.com> <87pmlisnst.fsf@linaro.org>
Message-ID: <6bf0ebc5-fe3c-5c59-0427-87f02a35e7f2@gmail.com>

On 15.04.22 12:07, Alex Bennée wrote:

Hello Alex

> Oleksandr Tyshchenko writes:
>
>> Hello all.
>>
>> [Sorry for the possible format issues]
>>
>> I have an update regarding a (valid) concern that has also been raised in
>> this thread: the virtio backend's ability (when using Xen foreign mapping)
>> to map any guest pages without the guest's "agreement".
>> There is a PoC (with virtio-mmio on Arm) which is based on Juergen Gross'
>> work to reuse secure Xen grant mapping for the virtio
>> communications.
>> All details are at:
>> https://lore.kernel.org/xen-devel/1649963973-22879-1-git-send-email-olekstysh at gmail.com/
>> https://lore.kernel.org/xen-devel/1649964960-24864-1-git-send-email-olekstysh at gmail.com/
> Thanks for that. I shall try and find some time to have a look at it.
>
> Did you see Viresh's post about getting our rust-vmm vhost-user backends
> working on Xen?

Great work! I saw the email in my mailbox but haven't analyzed it yet. I
will definitely take a look at it.

>
> One thing that came up during that work was how guest pages are mapped
> into the dom0 domain, where Xen needs to use kernel allocated pages via
> privcmd rather than the normal shared mmap that is used on KVM. As I
> understand it, this is to avoid the situation where dom0 may invalidate a
> user PTE, causing issues for the hypervisor itself. At some point we
> would like to fix that wrinkle so we can remove the (minor) hack in
> rust-vmm's mmap code to be truly hypervisor agnostic.
>
> Anyway I hope you and your team are safe and well.

Thank you!

>

-- 
Regards,

Oleksandr Tyshchenko

From viresh.kumar at linaro.org Tue Apr 19 01:11:10 2022
From: viresh.kumar at linaro.org (Viresh Kumar)
Date: Tue, 19 Apr 2022 01:11:10 -0000
Subject: [Rust-VMM] Virtio on Xen with Rust
In-Reply-To: <20220414092358.kepxbmnrtycz7mhe@vireshk-i7>
References: <20220414091538.jijj4lbrkjiby6el@vireshk-i7> <20220414092358.kepxbmnrtycz7mhe@vireshk-i7>
Message-ID: 

+rust-vmm at lists.opendev.org

On Thu, 14 Apr 2022 at 14:54, Viresh Kumar wrote:
>
> +xen-devel
>
> On 14-04-22, 14:45, Viresh Kumar wrote:
> > Hello,
> >
> > We verified our hypervisor-agnostic Rust based vhost-user backends with a Qemu
> > based setup earlier, and there was growing concern about whether they were truly
> > hypervisor-agnostic.
> >
> > In order to prove that, we decided to give it a try with Xen, a type-1
> > bare-metal hypervisor.
> >
> > We are happy to announce that we were able to make progress on that front and
> > have a working setup where we can test our existing Rust based backends, like
> > I2C, GPIO and RNG (though only I2C is tested as of now), over Xen.
> >
> > Key components:
> > --------------
> >
> > - Xen: https://github.com/vireshk/xen
> >
> > Xen requires MMIO and device specific support in order to populate the
> > required devices at the guest. This tree contains four patches on top of
> > mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).
> >
> > - libxen-sys: https://github.com/vireshk/libxen-sys
> >
> > We currently depend on the userspace tools/libraries provided by Xen, like
> > xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate provides Rust
> > wrappers over those calls, generated automatically with the help of Rust's
> > bindgen utility, which allow us to use the installed Xen libraries. Though we
> > plan to replace this with the Rust based "oxerun" (see below) in the longer run.
> >
> > - oxerun (WIP): https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls
> >
> > This is a Rust based implementation of the ioctls and hypercalls to Xen. It is
> > WIP and should eventually replace the "libxen-sys" crate entirely (which is a
> > C based implementation of the same).
> >
> > - vhost-device: https://github.com/vireshk/vhost-device
> >
> > These are Rust based vhost-user backends, maintained inside the rust-vmm
> > project. This already contains support for I2C and RNG, while GPIO is under
> > review. These do not need to be modified for a particular hypervisor and are
> > truly hypervisor-agnostic.
> >
> > Ideally the backends are hypervisor agnostic, as explained earlier, but
> > because of the way Xen maps the guest memory currently, we need a minor update
> > for the backends to work. Xen maps the memory via a kernel file,
> > /dev/xen/privcmd, which needs calls to mmap() followed by an ioctl() to make
> > it work.
For this a hack has been added to one of the rust-vmm crates,
> > vm-memory, which is used by vhost-user.
> >
> > https://github.com/vireshk/vm-memory/commit/54b56c4dd7293428edbd7731c4dbe5739a288abd
> >
> > The update to vm-memory is responsible for doing the ioctl() after the already
> > present mmap().
> >
> > - vhost-user-master (WIP): https://github.com/vireshk/vhost-user-master
> >
> > This implements the master side interface of the vhost protocol, and is like
> > the vhost-user-backend (https://github.com/rust-vmm/vhost-user-backend) crate
> > maintained inside the rust-vmm project, which provides similar infrastructure
> > for the backends to use. This shall be hypervisor independent and provide APIs
> > for the hypervisor specific implementations. This will eventually be
> > maintained inside the rust-vmm project and used by all Rust based hypervisors.
> >
> > - xen-vhost-master (WIP): https://github.com/vireshk/xen-vhost-master
> >
> > This is the Xen specific implementation and uses the APIs provided by the
> > "vhost-user-master", "oxerun" and "libxen-sys" crates for its functioning.
> >
> > This is designed based on EPAM's "virtio-disk" repository
> > (https://github.com/xen-troops/virtio-disk/) and is quite similar to it.
> >
> > One can see the analogy as:
> >
> > Virtio-disk == "Xen-vhost-master" + "vhost-user-master" + "oxerun" + "libxen-sys" + "vhost-device".
> >
> >
> > Test setup:
> > ----------
> >
> > 1. Build Xen:
> >
> > $ ./configure --libdir=/usr/lib --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --disable-docs --disable-golang --disable-ocamltools --with-system-qemu=/root/qemu/build/i386-softmmu/qemu-system-i386;
> > $ make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64
> >
> > 2.
Run Xen via Qemu on an X86 machine:
> >
> > $ qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 -serial mon:stdio \
> >   -device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
> >   -device virtio-scsi-pci -drive file=/home/vireshk/virtio/debian-bullseye-arm64.qcow2,index=0,id=hd0,if=none,format=qcow2 -device scsi-hd,drive=hd0 \
> >   -display none -m 8192 -smp 8 -kernel /home/vireshk/virtio/xen/xen \
> >   -append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
> >   -device guest-loader,addr=0x46000000,kernel=/home/vireshk/kernel/barm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
> >   -device ds1338,address=0x20   # This is required to create a virtual I2C based RTC device on Dom0.
> >
> > This should get Dom0 up and running.
> >
> > 3. Build the Rust crates:
> >
> > $ cd /root/
> > $ git clone https://github.com/vireshk/xen-vhost-master
> > $ cd xen-vhost-master
> > $ cargo build
> >
> > $ cd ../
> > $ git clone https://github.com/vireshk/vhost-device
> > $ cd vhost-device
> > $ cargo build
> >
> > 4. Set up the I2C based RTC device:
> >
> > $ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device; echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind
> >
> > 5. Let's run everything now:
> >
> > # Start the I2C backend in one terminal (open a new terminal with "ssh
> > # root at localhost -p8022"). This tells the I2C backend to hook up to the
> > # "/root/vi2c.sock0" socket and wait for the master to start transacting.
> > $ /root/vhost-device/target/debug/vhost-device-i2c -s /root/vi2c.sock -c 1 -l 0:32
> >
> > # Start the xen-vhost-master in another terminal. This provides the path of
> > # the socket to the master side and the device to look for from Xen, which is
> > # I2C here.
> > $ /root/xen-vhost-master/target/debug/xen-vhost-master --socket-path /root/vi2c.sock0 --name i2c
> >
> > # Start the guest in another terminal; i2c_domu.conf is attached.
The guest kernel
> > # should have Virtio related config options enabled, along with the
> > # i2c-virtio driver.
> > $ xl create -c i2c_domu.conf
> >
> > # The guest should boot fine now. Once the guest is up, you can create the I2C
> > # RTC device and use it. The following will create /dev/rtc0 in the guest,
> > # which you can configure with the 'hwclock' utility.
> >
> > $ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device
> >
> >
> > Hope this helps.
> >
> > --
> > viresh
> > 
> > i2c_domu.conf
> >
> > kernel="/root/Image"
> > memory=512
> > vcpus=2
> > command="console=hvc0 earlycon=xenboot"
> > name="domu"
> > i2c = [ "virtio=true, irq=1, base=1" ]
> --
> viresh
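[Editorial sketch] The "minor hack" Alex and Viresh discuss above boils down to one design point: on KVM a shared mmap() of guest memory is immediately usable, while on Xen the pages come from /dev/xen/privcmd and need a follow-up ioctl() before they are backed by guest memory. One way to keep the generic mapping code hypervisor agnostic is to push that extra step behind a trait. The sketch below is purely illustrative and is not the actual vm-memory API; all names (MapHook, map_region, etc.) are invented, the mmap() is stood in for by a plain allocation, and the Xen ioctl() is stubbed out.

```rust
/// Hypervisor-specific fixup to run after the generic mapping step.
/// The generic code only knows about this trait, not about Xen or KVM.
trait MapHook {
    /// Called with the address and length of the fresh mapping;
    /// returns true if the region is now usable as guest memory.
    fn after_mmap(&self, addr: usize, len: usize) -> bool;
}

/// KVM-style hook: a plain shared mmap() is already usable, nothing to do.
struct NoopHook;

impl MapHook for NoopHook {
    fn after_mmap(&self, _addr: usize, _len: usize) -> bool {
        true
    }
}

/// Xen-style hook: the pages were obtained from /dev/xen/privcmd and still
/// need an ioctl() to be populated. Stubbed here: real code would issue the
/// privcmd mapping ioctl; we only validate the inputs.
struct PrivcmdHook;

impl MapHook for PrivcmdHook {
    fn after_mmap(&self, _addr: usize, len: usize) -> bool {
        // Real code: ioctl(privcmd_fd, ...) over [addr, addr + len).
        len > 0
    }
}

/// Generic region setup, shared by all hypervisors. A Vec stands in for
/// the mmap()ed region so the sketch stays self-contained.
fn map_region(hook: &dyn MapHook, len: usize) -> Option<Vec<u8>> {
    let buf = vec![0u8; len];
    if hook.after_mmap(buf.as_ptr() as usize, len) {
        Some(buf)
    } else {
        None
    }
}

fn main() {
    // The generic caller is identical for both hypervisors; only the
    // injected hook differs.
    assert!(map_region(&NoopHook, 4096).is_some());
    assert!(map_region(&PrivcmdHook, 4096).is_some());
    assert!(map_region(&PrivcmdHook, 0).is_none());
    println!("ok");
}
```

With this shape, removing the wrinkle Alex mentions would only mean deleting the Xen hook, leaving the generic mmap path untouched.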