From peter.maydell at linaro.org Wed May 18 11:45:00 2022
From: peter.maydell at linaro.org (Peter Maydell)
Date: Wed, 18 May 2022 12:45:00 +0100
Subject: [Rust-VMM] CFP Reminder: KVM Forum 2022
Message-ID:

================================================================
KVM Forum 2022
September 12-14, 2022
Dublin, Ireland & Virtual

All submissions must be received before
*** Friday June 3rd, 2022 at 23:59 PDT ***
================================================================

KVM Forum is an annual event that presents a rare opportunity for
developers and users to discuss the state of Linux virtualization
technology and plan for the challenges ahead. This highly technical
conference unites the developers who drive KVM development and the
users who depend on KVM as part of their offerings, or to power their
data centers and clouds. Sessions include updates on the state of the
KVM virtualization stack, planning for the future, and many
opportunities for attendees to collaborate.

Over the years since its inclusion in the mainline kernel, KVM has
become a critical part of the FOSS cloud infrastructure. Come join us
in continuing to improve the KVM ecosystem.

This year's event is in Dublin, Ireland, but it is a combined
physical+virtual conference: both speaking and attending can be
virtual if you prefer. For more details, registration, travel and
health-and-safety information, visit:

https://events.linuxfoundation.org/kvm-forum/

For more information, some suggested topics, and to submit proposals,
please see:

https://events.linuxfoundation.org/kvm-forum/program/cfp/

We encourage you to submit, and to reach out to us should you have any
questions. The program committee may be contacted as a group via
email: kvm-forum-2022-pc at redhat.com.

Apologies from the Program Committee for not posting an announcement
of the CFP to these lists sooner.

From alex.bennee at linaro.org Mon May 23 10:24:36 2022
From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=)
Date: Mon, 23 May 2022 11:24:36 +0100
Subject: [Rust-VMM] vhost-device outstanding tasks
Message-ID: <87zgj87alq.fsf@linaro.org>

Hi,

This is one of several emails following up on Linaro's internal KWG
sprint last week in Cambridge, where a number of Project Stratos
hackers discussed our next steps and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.

Intro
=====

We've made good progress over the last year and have upstreamed a
number of device models as vhost-user daemons. We have also gotten our
first proof-of-concept build of the xen-vhost-master, which has
allowed us to reuse these backends on the Xen hypervisor.

Outstanding work
================

vm-virtio definitions
---------------------

Because our vhost-user daemons were not re-implementing existing
virtio device models, a number of the queue-handling definitions live
in the vhost-device repository itself. As discussed before, now that
these are working we should migrate the common definitions to the
vm-virtio crate so that in-VMM virtio emulation can re-use this code.
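
To make that concrete, here is a minimal, self-contained sketch of the
kind of split-virtqueue descriptor-chain walking that each daemon
currently carries a copy of. The type and field names mirror the
virtio 1.x specification's split ring layout rather than the actual
vm-virtio API, so treat the names as illustrative:

  // Schematic model of split-virtqueue descriptor chains -- the kind of
  // logic every vhost-user daemon needs and that a shared crate can
  // provide. Layout follows the virtio spec; this is not the vm-virtio API.

  const VIRTQ_DESC_F_NEXT: u16 = 0x1; // chain continues via `next`
  const VIRTQ_DESC_F_WRITE: u16 = 0x2; // device writes to this buffer

  /// One entry in the descriptor table (virtio 1.x split ring).
  #[derive(Clone, Copy)]
  struct Descriptor {
      addr: u64,  // guest-physical address of the buffer
      len: u32,   // buffer length in bytes
      flags: u16, // VIRTQ_DESC_F_* bits
      next: u16,  // index of the next descriptor if F_NEXT is set
  }

  /// Walk a chain starting at `head`, splitting it into driver-readable
  /// and device-writable parts, as a backend does before parsing a
  /// request header and filling in the response.
  fn walk_chain(table: &[Descriptor], head: u16) -> (Vec<Descriptor>, Vec<Descriptor>) {
      let (mut readable, mut writable) = (Vec::new(), Vec::new());
      let mut idx = head as usize;
      loop {
          let desc = table[idx];
          if desc.flags & VIRTQ_DESC_F_WRITE != 0 {
              writable.push(desc);
          } else {
              readable.push(desc);
          }
          if desc.flags & VIRTQ_DESC_F_NEXT == 0 {
              break;
          }
          idx = desc.next as usize;
      }
      (readable, writable)
  }

  fn main() {
      // A two-descriptor chain: a readable request then a writable reply.
      let table = [
          Descriptor { addr: 0x1000, len: 64, flags: VIRTQ_DESC_F_NEXT, next: 1 },
          Descriptor { addr: 0x2000, len: 256, flags: VIRTQ_DESC_F_WRITE, next: 0 },
      ];
      let (readable, writable) = walk_chain(&table, 0);
      assert_eq!((readable.len(), writable.len()), (1, 1));
      println!("request at {:#x}: {} bytes; response space: {} bytes",
               readable[0].addr, readable[0].len, writable[0].len);
  }

Every real backend also needs the bounds checking, guest-memory access
and event-index handling around this, which is exactly why centralising
it in vm-virtio beats each daemon growing its own copy.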
Get outstanding vsock PR merged
-------------------------------

We actually have two outstanding PRs against the vhost-device
repository, implementing virtio-vsock and virtio-scsi. They were done
as GSoC projects but didn't get merged at the time due to lack of
review. They currently have outstanding requests for code changes, but
due to the nature of GSoC it looks like the original authors don't
have time to make them, which is understandable given the changes the
repository has gone through over the last two years.

I'm agnostic about virtio-scsi, but given the usefulness of
virtio-vsock it seems a shame to leave an implementation to wither on
a branch. There has been some work on vm-virtio to improve the queue
handling, and with Andreea's help I have a branch that uses it. Should
we just pick up the branch and finish the pull request process?

Sort out an official vhost-master repository in rust-vmm
--------------------------------------------------------

The rust-vmm project has the vhost-user-backend crate, which
implements the core backend behaviour for handling vhost-user
messages. There is also an abstraction for vhost (user and kernel
handling) from the VMM side in the vhost repository. However, it
doesn't provide everything needed to implement a full vhost-master.
Currently Viresh is using https://github.com/vireshk/vhost-user-master
in the xen-vhost-master project; it is constructed from the in-VMM
vhost-master bits from Cloud Hypervisor. We should get this properly
upstreamed into the rust-vmm project. Should it be merged into the
existing rust-vmm/vhost repository, or does it require its own
repository?

Properly document and support cross-compilation
-----------------------------------------------

Most of our testing is currently on Arm systems, and we are either:

- hacking up the local repo for cross-compilation, or
- doing a "native" build in a QEMU-emulated AArch64 system

The second option is potentially quite slow, at least for the first
build. Given that building backends for non-x86 systems is core to
Linaro's goals, we should properly support cross-compilation for the
vhost-device repository and document it. This should also be enabled
in the CI to ensure the configuration doesn't bitrot.

See Also
========

Other subjects discussed will be covered in separate emails today with
different distribution lists. These are:

- Xen-specific enabling work
- Additional virtio devices
- Integrating rust-vmm with QEMU

Happy reading ;-)

--
Alex Bennée

From alex.bennee at linaro.org Mon May 23 11:27:17 2022
From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=)
Date: Mon, 23 May 2022 12:27:17 +0100
Subject: [Rust-VMM] Remaining Xen enabling work for rust-vmm
Message-ID: <87pmk472ii.fsf@linaro.org>

Hi,

This is one of several emails following up on Linaro's internal KWG
sprint last week in Cambridge, where a number of Project Stratos
hackers discussed our next steps and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.

Intro
=====

We've made good progress over the last year and have upstreamed a
number of device models as vhost-user daemons. We have also gotten our
first proof-of-concept build of the xen-vhost-master, which has
allowed us to reuse these backends on the Xen hypervisor.

https://github.com/vireshk/xen-vhost-master

Remaining Work
==============

Scope out the remainder of APIs needed for oxerun
-------------------------------------------------

The current xen-vhost-master uses a combination of the native Rust
oxerun, a bindgen import of libxen-sys
(https://github.com/vireshk/libxen-sys), and a number of Xen libraries
built directly in the xen-vhost-master repository. Our intention for
the Stratos work is to remove any C dependency from the Rust backend
and use native Rust bindings to talk to the hypervisor control ioctl.
Identifying what is needed should be easy enough, as we can see where
in the master repository C calls are being made. This work should be
broken down into groups in JIRA so it can be efficiently divided up.

Currently our focus for the rust-vmm repo is supporting the vhost-user
daemons, but a wider conversation needs to be had with the community
about the rest of the tooling involved in the creation and control of
DomU guests. For Stratos we would like to explore the possibilities of
bare-metal monitor programs for dom0-less (or dom0-light?) setups.
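
For a flavour of what "native Rust bindings to the hypervisor control
ioctl" could look like, here is a sketch that opens /dev/xen/privcmd
and issues a harmless xen_version hypercall using only std and the
libc crate. The struct layout, hypercall number and ioctl request
encoding follow my reading of xen/privcmd.h and xen.h, but treat them
as assumptions -- the real oxerun bindings should derive these from the
headers (e.g. via bindgen) rather than hard-coding them:

  use std::fs::OpenOptions;
  use std::os::unix::io::AsRawFd;

  /// Mirror of the kernel's privcmd_hypercall structure: a hypercall
  /// number plus up to five arguments. Layout is an assumption taken
  /// from xen/privcmd.h; verify before relying on it.
  #[repr(C)]
  #[allow(dead_code)] // fields are consumed by the kernel, not Rust code
  struct PrivcmdHypercall {
      op: u64,
      arg: [u64; 5],
  }

  // Illustrative request number for IOCTL_PRIVCMD_HYPERCALL, i.e.
  // _IOC(_IOC_NONE, 'P', 0, sizeof(privcmd_hypercall)); derive the real
  // value from the header with bindgen or nix's ioctl macros.
  const IOCTL_PRIVCMD_HYPERCALL: libc::c_ulong = 0x305000;

  fn main() -> std::io::Result<()> {
      let privcmd = OpenOptions::new()
          .read(true)
          .write(true)
          .open("/dev/xen/privcmd")?;

      // __HYPERVISOR_xen_version (op 17) with cmd XENVER_version (0):
      // a read-only hypercall that returns the hypervisor version.
      let mut call = PrivcmdHypercall { op: 17, arg: [0; 5] };

      let ret = unsafe {
          libc::ioctl(
              privcmd.as_raw_fd(),
              IOCTL_PRIVCMD_HYPERCALL,
              &mut call as *mut PrivcmdHypercall,
          )
      };
      if ret < 0 {
          return Err(std::io::Error::last_os_error());
      }
      // XENVER_version packs the version as (major << 16) | minor.
      println!("running on Xen {}.{}", ret >> 16, ret & 0xffff);
      Ok(())
  }

Enumerating which of these ioctls the C calls in xen-vhost-master
ultimately boil down to would give us the JIRA breakdown mentioned
above.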
Strategy for testing oxerun in the rust-vmm project
---------------------------------------------------

Currently the rust-vmm projects rely heavily on unit tests and a
(mostly) x86 build farm. While building for non-x86 architectures
isn't insurmountable, black-box testing on real hypervisors isn't
currently supported. Given the low-level nature of the interactions,
simply mocking the ioctl interface to the kernel is unlikely to
exercise things sufficiently. We need a way to execute tests on a real
system with a real Xen hypervisor and dom0 setup. We can either:

- somehow add Xen hosts to the Buildkite runner pool for rust-vmm, or
- investigate using QEMU TCG as a portable system-in-a-box to run Xen
  and guests

Currently this is blocking wider upstreaming of the oxerun code to
https://github.com/rust-vmm/xen-sys in the same way other rust-vmm
repos work.

See also
========

Other subjects discussed will be covered in separate emails today with
different distribution lists. These are:

- Remaining work for vhost-device
- Additional virtio devices
- Integrating rust-vmm with QEMU

Happy reading ;-)

--
Alex Bennée

From fungi at yuggoth.org Tue May 31 15:17:15 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 31 May 2022 15:17:15 +0000
Subject: [Rust-VMM] Moderation queue flushed
Message-ID: <20220531151714.22g7beno4ili2uho@yuggoth.org>

Just a heads-up that I flushed several dozen messages which were
waiting for a moderator to approve them, so if you received a sudden
flood of messages from the ML (some dating back to February), that's
why. The majority were held because they were sent from, or
cross-posted by, people who aren't subscribed to this list, because
they had very large recipient lists (>10 addresses), or because they
were over the default 40KB size limit.

On a related note, if anyone is interested in helping check the
moderation queue for this list regularly, please let me know.

--
Jeremy Stanley