[Rust-vmm] Goals for this list

Boeuf, Sebastien sebastien.boeuf at intel.com
Thu Dec 20 07:50:32 UTC 2018


On Thu, 2018-12-20 at 09:55 +0800, Jiang Liu wrote:


On Dec 20, 2018, at 9:40 AM, Jiang Liu <liuj97 at gmail.com> wrote:



On Dec 20, 2018, at 7:17 AM, Boeuf, Sebastien <sebastien.boeuf at intel.com> wrote:

On Wed, 2018-12-19 at 15:02 -0800, Dylan Reid wrote:
On Wed, Dec 19, 2018 at 2:59 PM Steve Rutherford <srutherford at google.com> wrote:


Hint received :)
Getting userspace instruction emulation to work with nested has been a bit
of a fight, and we've been waiting to push stuff upstream until we have it
everywhere.
The crosvm team is currently investigating moving the PIT and APIC
emulation to user space. If that works, instruction emulation will be
next on the list, including helping to get the kernel side landed
upstream.


That's nice! Quick question: why do you want to emulate the PIT? If you
emulate the APIC feature X86_FEATURE_TSC_DEADLINE_TIMER, the legacy timer
emulation should not be required, right?
Oh I guess that's because some other architectures might need the PIT.

Are you planning to make this modular so that we could choose to pick
only the APIC emulation?

Yeah, the APIC deadline timer could be used here so we could remove the PIT,
and even the PIC and local APIC. We have done a quick PoC using NetBSD and
uKVM. I think it should work with Linux too.
Sorry, "local APIC" should be "IO APIC"; we still use the in-kernel local APIC
for MSI and IPI.

Yes, if you don't expect any pin-based interrupts, then removing the IOAPIC
and the PIC should also work just fine, and the local APIC itself should be
enough.
Talking about Linux, if you take a look here:
https://github.com/torvalds/linux/blob/master/arch/x86/kernel/apic/apic.c#L818,
you can see that having the deadline timer skips the whole calibration of the
APIC timer, which prevents falling back onto the legacy timers
(https://github.com/torvalds/linux/blob/master/arch/x86/kernel/apic/apic.c#L840-L842),
which are specifically expected to be a PIT or an HPET.
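
[Editor's note: for reference, a minimal sketch of the CPUID check involved.
CPUID.01H:ECX bit 24 advertises the TSC-deadline timer (Intel SDM); a VMM
that passes this bit through to the guest CPUID is what lets the Linux code
above skip the PIT/HPET-based APIC timer calibration. This is not code from
crosvm or the Google patches; the function and constant names are
illustrative.]

#[cfg(target_arch = "x86_64")]
fn host_has_tsc_deadline_timer() -> bool {
    // Bit 24 of CPUID.01H:ECX advertises the TSC-deadline timer; the
    // constant name here is ours, not from any crate.
    const TSC_DEADLINE_BIT: u32 = 1 << 24;
    // CPUID leaf 1 is always present on x86_64, so this call is well defined.
    let leaf1 = unsafe { core::arch::x86_64::__cpuid(1) };
    (leaf1.ecx & TSC_DEADLINE_BIT) != 0
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("TSC-deadline timer: {}", host_has_tsc_deadline_timer());
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
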






On Wed, Dec 19, 2018 at 1:13 PM Paolo Bonzini <pbonzini at redhat.com> wrote:


On 18/12/18 00:30, Liguori, Anthony wrote:

As a side note, I think having OS X hypervisor framework bindings and
whatever the new Windows thing is would be pretty cool.
Yes, indeed.  Hypervisor.framework however is much more complex than KVM or
WHP because you deal manually with VMCSes and have to do instruction
emulation in userspace.  QEMU takes a stab at it, but it's not as stable
as KVM.

*However* Google does have patches for KVM to do instruction emulation in
userspace, and I'd like to apply them upstream too now that KVM has an API
test framework (and thus we can know they won't bitrot).  (Steve, you are in
Cc because hint, hint :)).  Once that is in place, I guess a minimal x86
emulator written in Rust, porting the emulator code that QEMU has for
Hypervisor.framework, would be a fun GSoC project for a very good student.


2) The crosvm data_model crate.  This one is super critical but easy to
misunderstand, as it allows for safe access to volatile memory.  Somewhat
related are the mmap() bits from sys_util.  Not sure how the crosvm folks
feel, but I think there is some refactoring here that could be useful to
build a memory crate.
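
[Editor's note: to make the "safe access to volatile memory" point concrete,
here is a minimal sketch of the idea. This is not the crosvm data_model code
and the type and method names are illustrative: guest memory can change under
the VMM at any time, so accesses go through bounds-checked volatile reads and
writes instead of plain references.]

use std::ptr;

/// A mapped byte region (e.g. obtained via mmap()); layout and names are
/// illustrative, not crosvm's actual types.
struct VolatileSlice {
    addr: *mut u8,
    len: usize,
}

impl VolatileSlice {
    /// Bounds-checked volatile read of a little-endian u32 at `offset`.
    fn read_u32(&self, offset: usize) -> Option<u32> {
        if offset.checked_add(4)? > self.len {
            return None;
        }
        // Per-byte volatile loads keep the compiler from caching or
        // reordering accesses to memory the guest may change at any time.
        let mut bytes = [0u8; 4];
        for (i, b) in bytes.iter_mut().enumerate() {
            *b = unsafe { ptr::read_volatile(self.addr.add(offset + i)) };
        }
        Some(u32::from_le_bytes(bytes))
    }

    /// Bounds-checked volatile write of a little-endian u32 at `offset`.
    fn write_u32(&self, offset: usize, val: u32) -> Option<()> {
        if offset.checked_add(4)? > self.len {
            return None;
        }
        for (i, b) in val.to_le_bytes().iter().enumerate() {
            unsafe { ptr::write_volatile(self.addr.add(offset + i), *b) };
        }
        Some(())
    }
}

fn main() {
    let mut backing = vec![0u8; 16];
    let slice = VolatileSlice { addr: backing.as_mut_ptr(), len: backing.len() };
    slice.write_u32(8, 0xdead_beef).unwrap();
    assert_eq!(slice.read_u32(8), Some(0xdead_beef));
}
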

3) Some traits for device model implementations.  It's easy to really bike
shed here so I reckon it's best to start with a concrete device model like a
UART, work through what is required for interfaces, and then iterate from
there.
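
[Editor's note: as a strawman for what such a trait might look like, and not
anything agreed in this thread, something in the following spirit is enough
to hang a toy UART off of. All names (BusDevice, Serial) are illustrative.]

/// A device that responds to reads and writes at offsets within its
/// MMIO/PIO range.
trait BusDevice: Send {
    fn read(&mut self, offset: u64, data: &mut [u8]);
    fn write(&mut self, offset: u64, data: &[u8]);
}

/// A toy 16550A-style serial port: just enough of the THR/LSR registers to
/// echo guest output to stdout. Real register semantics are omitted.
struct Serial {
    lsr: u8, // Line Status Register value we always report back
}

impl BusDevice for Serial {
    fn read(&mut self, offset: u64, data: &mut [u8]) {
        if data.len() != 1 {
            return;
        }
        data[0] = match offset {
            5 => self.lsr, // LSR: report "transmitter empty" so the guest keeps writing
            _ => 0,
        };
    }

    fn write(&mut self, offset: u64, data: &[u8]) {
        if offset == 0 {
            // THR: print whatever byte the guest transmitted.
            for b in data {
                print!("{}", *b as char);
            }
        }
    }
}

fn main() {
    // 0x20 = THRE ("transmit holding register empty") in the LSR.
    let mut uart = Serial { lsr: 0x20 };
    uart.write(0, b"hello from a toy UART\n");
}
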

4) Common device models with only a single implementation (e.g. the
16550A).  Not sure about virtio, maybe.
virtio would be interesting.  One initial target could be a demo vhost-user
client: it has to set up a memory map, parse vrings, handle endianness, etc.
It would be an interesting benchmark for a DMA API.
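
[Editor's note: to illustrate the "parse vrings, handle endianness" part,
here is a minimal sketch of decoding one virtio descriptor out of a byte
buffer standing in for guest memory. The 16-byte addr/len/flags/next layout
and little-endian encoding come from the virtio spec; the struct and function
names are illustrative, not from any existing crate.]

use std::convert::TryInto;

/// One virtio descriptor as laid out in guest memory: 16 bytes, all fields
/// little-endian.
#[derive(Debug)]
struct VringDesc {
    addr: u64,  // guest-physical address of the buffer
    len: u32,   // buffer length in bytes
    flags: u16, // e.g. NEXT / WRITE flags
    next: u16,  // index of the chained descriptor, if the NEXT flag is set
}

const VRING_DESC_SIZE: usize = 16;

/// Decode descriptor `index` from a raw descriptor table.
fn parse_desc(table: &[u8], index: usize) -> Option<VringDesc> {
    let off = index.checked_mul(VRING_DESC_SIZE)?;
    let end = off.checked_add(VRING_DESC_SIZE)?;
    let d = table.get(off..end)?;
    Some(VringDesc {
        // Explicit from_le_bytes conversions keep this correct on any host.
        addr: u64::from_le_bytes(d[0..8].try_into().ok()?),
        len: u32::from_le_bytes(d[8..12].try_into().ok()?),
        flags: u16::from_le_bytes(d[12..14].try_into().ok()?),
        next: u16::from_le_bytes(d[14..16].try_into().ok()?),
    })
}

fn main() {
    // Build one fake descriptor: addr=0x1000, len=512, flags=1 (NEXT), next=1.
    let mut table = vec![0u8; VRING_DESC_SIZE];
    table[0..8].copy_from_slice(&0x1000u64.to_le_bytes());
    table[8..12].copy_from_slice(&512u32.to_le_bytes());
    table[12..14].copy_from_slice(&1u16.to_le_bytes());
    table[14..16].copy_from_slice(&1u16.to_le_bytes());
    println!("{:?}", parse_desc(&table, 0));
}
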

The control plane (your item 3) by comparison is a bit less
interesting.

Paolo

