Hi all,
How interested would you be in patches for Unix family operating systems
other than Linux? I'm working on a prototype lightweight VMM on the
Illumos Kernel and Bhyve ioctl interface, and I'm planning to reuse
rust-vmm libraries where I can. I prefer to work upstream on shared
libraries, but I totally understand if the focus here is Linux, and
patches for other Unix family operating systems are undesirable.
As a concrete example of how non-disruptive this can be, the patch I'm
debating whether to submit is a four-line substitution in vmm-sys-util,
which changes #[cfg(unix)] to #[cfg(target_os = "linux")]. Right now, those lines are
checking for target_family = "unix", but the code in question is
actually unique to Linux, and fails to compile on other Unix family
operating systems (mostly due to type errors, and using features that
only exist on Linux), so it really should be checking target_os =
"linux" instead. The specific changes are lines 8, 17, and 19 in
src/lib.rs, which determine whether to compile the code in the
architecture-specific 'unix' directory and whether to pull in the
bitflags dependency (which only the 'unix' directory uses), plus line
183 in src/errno.rs, which tests for a very specific string message as
the text for libc::EBADF; that message is worded slightly differently
on other Unix family operating systems.
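To make the shape of the change concrete, here is a rough sketch; the
module body, the test, and the exact EBADF wording below are illustrative
placeholders (and assume the libc crate), not the real vmm-sys-util lines:

    // Before: compiled on every target_family = "unix" OS, including Illumos.
    // #[cfg(unix)]
    // mod unix;

    // After: compiled only where the Linux-only types and features exist.
    #[cfg(target_os = "linux")]
    mod unix {
        // The Linux-specific helpers from the 'unix' directory live here.
    }

    // The errno test is the same story: strerror() wording for EBADF differs
    // between libcs, so the string comparison should be gated on Linux too.
    #[cfg(all(test, target_os = "linux"))]
    mod errno_tests {
        #[test]
        fn ebadf_message() {
            // glibc wording; illumos libc phrases this message differently.
            assert_eq!(
                std::io::Error::from_raw_os_error(libc::EBADF).to_string(),
                "Bad file descriptor (os error 9)"
            );
        }
    }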
That simple substitution makes it possible to use both vmm-sys-util and
vm-memory on Illumos (since vm-memory was only using
architecture-independent parts of vmm-sys-util for temp files in tests,
and nothing in the 'unix' directory). I haven't started systematically
testing other rust-vmm libraries for non-Linux compatibility yet, but I
expect I'll find similar patterns.
(Full disclosure: I personally prefer Linux Kernel and KVM, but this is
paid work, and sparks enough intellectual curiosity for me to be willing
to do it.)
Thanks for any thoughts,
Allison
Just a heads up that I flushed several dozen messages which
were waiting for a moderator to approve, so if you received a sudden
flood of messages for the ML (some dating back to February), that's
why. The majority were held because they were being sent from or
cross-posted by people who aren't subscribed to this list, or
because they had very large recipient lists (>10 addresses), or were
over the default 40KB size limit.
On a related note, if anyone is interested in helping check the
moderation queue for this list regularly, please let me know.
--
Jeremy Stanley
Hi,
This is one of several emails to follow up on Linaro's internal KWG
sprint last week in Cambridge where a number of Project Stratos hackers
discussed what next steps we have and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.
Intro
=====
We've made good progress over the last year and have up-streamed a number
of device models as vhost-user daemons. We have also gotten our first
proof of concept build of the xen-vhost-master which has allowed us to
reuse these backends on the Xen hypervisor.
https://github.com/vireshk/xen-vhost-master
Remaining Work
==============
Scope out the remainder of APIs needed for oxerun
-------------------------------------------------
The current xen-vhost-master uses a combination of the native Rust
oxerun, a bindgen import of libxen-sys
(https://github.com/vireshk/libxen-sys), and a number of Xen libraries
built directly in the xen-vhost-master repository.
Our intention for the Stratos work is to remove any C dependency from
the Rust backend and use native Rust bindings to talk to the hypervisor
control ioctl.
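As a very rough illustration of the shape such a binding could take,
here is a sketch of an ioctl wrapper; the device path, request number
and argument layout are placeholders rather than the real privcmd ABI,
and it assumes the libc crate:

    use std::fs::OpenOptions;
    use std::os::unix::io::AsRawFd;

    // Placeholder request number and argument layout; the real values come
    // from the hypervisor control interface headers we want to replace.
    const HYPERCALL_IOCTL: libc::c_ulong = 0x0000_3000;

    #[repr(C)]
    struct HypercallArgs {
        op: u64,
        arg: [u64; 5],
    }

    fn hypercall(op: u64, arg: [u64; 5]) -> std::io::Result<i64> {
        // Open the control node exposed to dom0 (path is illustrative).
        let ctl = OpenOptions::new()
            .read(true)
            .write(true)
            .open("/dev/xen/privcmd")?;
        let mut call = HypercallArgs { op, arg };
        // SAFETY: we pass a pointer to a live, repr(C) struct and the fd
        // outlives the call.
        let ret = unsafe {
            libc::ioctl(ctl.as_raw_fd(), HYPERCALL_IOCTL,
                        &mut call as *mut HypercallArgs)
        };
        if ret < 0 {
            Err(std::io::Error::last_os_error())
        } else {
            Ok(ret as i64)
        }
    }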
Identifying what is needed should be easy enough, as we can see where in
the master repository C calls are being made. This work should be broken
down into groups in JIRA so it can be efficiently divided up.
Currently our focus for the rust-vmm repo is to support the vhost-user
daemons but a wider conversation needs to be had with the community
about the rest of the tooling involved in the creation and control of
DomU guests. For Stratos we would like to explore the possibilities of
bare metal monitor programs for dom0-less (or dom0-light?) setups.
Strategy for testing oxerun in the rust-vmm project
---------------------------------------------------
Currently the rust-vmm projects rely heavily on unit tests and a (mostly)
x86 build farm. While building for non-x86 architectures isn't
insurmountable, doing black-box testing on real hypervisors isn't
currently supported. Given the low-level nature of the interactions,
simply mocking the ioctl interface to the kernel is unlikely to exercise
things sufficiently.
We need a way to execute tests on a real system with a real Xen
hypervisor and dom0 setup. We can either:
- somehow add Xen hosts to the Buildkite runner pool for rust-vmm
or
- investigate using QEMU TCG as a portable system in a box to run Xen
and guests
This is currently blocking wider up-streaming of the oxerun code to
https://github.com/rust-vmm/xen-sys, where it would work in the same way
as the other rust-vmm repos.
See also
========
Other topics discussed will be covered in separate emails today, sent to
different distribution lists. These are:
- Remaining work for vhost-device
- Additional virtio devices
- Integrating rust-vmm with QEMU
Happy reading ;-)
--
Alex Bennée
Hi,
This is one of several emails to follow up on Linaro's internal KWG
sprint last week in Cambridge where a number of Project Stratos hackers
discussed what next steps we have and started to think about future
work. I am splitting the update into several emails so I can freely CC
the relevant lists for each without too much cross-posting spam.
Intro
=====
We've made good progress over the last year and have up-streamed a number
of device models as vhost-user daemons. We have also gotten our first
proof of concept build of the xen-vhost-master which has allowed us to
reuse these backends on the Xen hypervisor.
Outstanding work
================
vm-virtio definitions
---------------------
Given that our vhost-user daemons were not re-implementing existing
virtio device models, a number of the queue-handling definitions live in
the vhost-device repository itself. As discussed before, now that we have
these working we should migrate the common definitions to the vm-virtio
crate so that in-VMM virtio emulation can re-use this code.
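As an illustration of the kind of definitions I mean, think of the
constants and config-space layouts that a vhost-user daemon and an
in-VMM model of the same device both need; the GPIO-flavoured names
below are made up for the example rather than taken from the existing
vm-virtio API:

    /// Virtio device type for GPIO, as assigned in the virtio specification.
    pub const VIRTIO_ID_GPIO: u32 = 41;

    /// Device feature bit: the device supports interrupt reporting.
    pub const VIRTIO_GPIO_F_IRQ: u64 = 1 << 0;

    /// Config-space layout shared by the daemon and any in-VMM emulation.
    #[repr(C, packed)]
    #[derive(Clone, Copy, Default)]
    pub struct VirtioGpioConfig {
        pub ngpio: u16,
        pub padding: u16,
        pub gpio_names_size: u32,
    }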
Get outstanding vsock PR merged
-------------------------------
We actually have two outstanding PRs against the vhost-device
repository which implement virtio-vsock and virtio-scsi. They were done
as GSoC projects but didn't get merged at the time due to lack of
review. They currently have outstanding requests for code changes, but
due to the nature of GSoC it looks like the original authors don't have
time to make them, which is understandable given the changes the
repository has gone through over the last two years.
I'm agnostic about virtio-scsi, but given the usefulness of virtio-vsock
it seems a shame to leave an implementation to wither on a branch.
There has been some work on vm-virtio to improve the queue handling, and
with Andreea's help I have a branch that uses it. Should we just pick
up the branch and finish the pull request process?
Sort out an official vhost-master repository in rust-vmm
--------------------------------------------------------
The rust-vmm project has the vhost-user-backend which implements the
core backend behaviour for handling vhost-user messages. There is also
an abstraction for vhost (user and kernel handling) from the VMM side in
the vhost repository. However, it doesn't provide everything needed to
implement a full vhost-master. Currently Viresh is using
https://github.com/vireshk/vhost-user-master in the xen-vhost-master
project; it is constructed from the in-VMM vhost-master bits from Cloud
Hypervisor. We should get this properly up-streamed into the rust-vmm
project.
Should this be merged into the existing rust-vmm/vhost repository, or
does it require its own repository?
Properly document and support cross-compilation
-----------------------------------------------
Currently most of our testing is on Arm systems, and we are either:
- hacking up the local repo for cross-compilation
or
- doing a "native" build in a QEMU-emulated AArch64 system
The second option is potentially quite slow, at least for the first
build. Given that building backends for non-x86 systems is core to
Linaro's goals, we should properly support cross-compilation for the
vhost-device repository and document it. This should also be enabled in
the CI to ensure the configuration doesn't bitrot.
See Also
========
Other topics discussed will be covered in separate emails today, sent to
different distribution lists. These are:
- Xen specific enabling work
- Additional virtio devices
- Integrating rust-vmm with QEMU
Happy reading ;-)
--
Alex Bennée
================================================================
KVM Forum 2022
September 12-14, 2022
Dublin, Ireland & Virtual
All submissions must be received before
*** Friday June 3rd, 2022 at 23:59 PDT ***
================================================================
KVM Forum is an annual event that presents a rare opportunity for
developers and users to discuss the state of Linux virtualization
technology and plan for the challenges ahead. This highly technical
conference unites the developers who drive KVM development and the
users who depend on KVM as part of their offerings, or to power
their data centers and clouds. Sessions include updates on the state
of the KVM virtualization stack, planning for the future, and many
opportunities for attendees to collaborate. Over the years since
its inclusion in the mainline kernel, KVM has become a critical part
of the FOSS cloud infrastructure. Come join us in continuing to
improve the KVM ecosystem.
This year's event is in Dublin, Ireland, but it is a combined
physical+virtual conference: both speaking and attending can be
virtual if you prefer. For more details, registration, travel
and health and safety information, visit:
https://events.linuxfoundation.org/kvm-forum/
For more information, some suggested topics, and to submit
proposals, please see:
https://events.linuxfoundation.org/kvm-forum/program/cfp/
We encourage you to submit and reach out to us should you have any
questions. The program committee may be contacted as a group via
email: kvm-forum-2022-pc@redhat.com.
Apologies from the Program Committee for not posting an
announcement of the CFP to these lists sooner.