From marcandre.lureau at gmail.com Mon Feb 4 10:55:18 2019 From: marcandre.lureau at gmail.com (=?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?=) Date: Mon, 4 Feb 2019 11:55:18 +0100 Subject: [Rust-VMM] [RFC, WIP] rust implementation of vhost-user protocol In-Reply-To: <0FB344A6-C13A-40B6-B2E0-C4C9551590B3@linux.alibaba.com> References: <0FB344A6-C13A-40B6-B2E0-C4C9551590B3@linux.alibaba.com> Message-ID: Hi On Fri, Jan 25, 2019 at 6:31 AM Liu Jiang wrote: > > Hi all, > As we have discussed, community members have shown interest in a Rust implementation of the vhost-user protocol. It happens that we are already working on a vhost-user implementation, but it is still at a very early stage. > I think it would be better to share our work with the community as early as possible so we can cooperate on the design and implementation. > The overall idea is to implement a Rust crate for the vhost-user protocol, and then extend the vhost driver in crosvm/firecracker with a thin wrapper supporting both vhost (kernel) and vhost-user backends. The crate is at a very early stage and only implements the skeleton and basic commands, so there is a long todo list: > 1) support dirty page log > 2) support live migration > 3) support IOMMU/IOTLB > 4) better documentation > 5) more unit test cases > > I have hosted the crate at my personal GitHub repository at https://github.com/jiangliu/vhostuser_rs and hope it can be hosted by the rust-vmm project eventually. > Any comments, suggestions and PRs are welcome! I had a quick look. Nice work Liu, it looks like a very good start! I hope I can start using & contributing to it soon. Is there any user of the crate? In particular, do you have code to manipulate the virtio rings? thanks! -- Marc-André Lureau From liuj97 at gmail.com Mon Feb 4 15:26:46 2019 From: liuj97 at gmail.com (Liu Jiang) Date: Mon, 4 Feb 2019 23:26:46 +0800 Subject: [Rust-VMM] [RFC, WIP] rust implementation of vhost-user protocol In-Reply-To: References: <0FB344A6-C13A-40B6-B2E0-C4C9551590B3@linux.alibaba.com> Message-ID: > On Feb 4, 2019, at 6:55 PM, Marc-André Lureau > wrote: > > Hi > > On Fri, Jan 25, 2019 at 6:31 AM Liu Jiang > wrote: >> >> Hi all, >> As we have discussed, community members have shown interest in a Rust implementation of the vhost-user protocol. It happens that we are already working on a vhost-user implementation, but it is still at a very early stage. >> I think it would be better to share our work with the community as early as possible so we can cooperate on the design and implementation. >> The overall idea is to implement a Rust crate for the vhost-user protocol, and then extend the vhost driver in crosvm/firecracker with a thin wrapper supporting both vhost (kernel) and vhost-user backends. The crate is at a very early stage and only implements the skeleton and basic commands, so there is a long todo list: >> 1) support dirty page log >> 2) support live migration >> 3) support IOMMU/IOTLB >> 4) better documentation >> 5) more unit test cases >> >> I have hosted the crate at my personal GitHub repository at https://github.com/jiangliu/vhostuser_rs and hope it can be hosted by the rust-vmm project eventually. >> Any comments, suggestions and PRs are welcome! > > I had a quick look. Nice work Liu, it looks like a very good start! I > hope I can start using & contributing to it soon. > > Is there any user of the crate? In particular, do you have code to > manipulate the virtio rings? Hi Marc, Glad to know it's useful, and I look forward to cooperating on the vhost-user crate.
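
For readers unfamiliar with the wire format such a crate has to speak: a vhost-user message is a fixed 12-byte header (request, flags, payload size) followed by an optional payload and optional file descriptors, with numbers in the machine's native byte order per the spec. The sketch below is only an illustration of that header; the type and method names are assumptions, not the actual vhostuser_rs API.

    // Minimal sketch of the vhost-user message header; illustrative only.
    use std::convert::TryInto;
    use std::io::{self, Read, Write};

    #[repr(C)]
    #[derive(Clone, Copy, Debug, Default)]
    pub struct VhostUserMsgHeader {
        pub request: u32, // e.g. GET_FEATURES, SET_MEM_TABLE, ...
        pub flags: u32,   // protocol version plus reply/need-reply bits
        pub size: u32,    // number of payload bytes following the header
    }

    impl VhostUserMsgHeader {
        pub fn write_to<W: Write>(&self, w: &mut W) -> io::Result<()> {
            // The spec uses the machine's native byte order.
            w.write_all(&self.request.to_ne_bytes())?;
            w.write_all(&self.flags.to_ne_bytes())?;
            w.write_all(&self.size.to_ne_bytes())
        }

        pub fn read_from<R: Read>(r: &mut R) -> io::Result<Self> {
            let mut buf = [0u8; 12];
            r.read_exact(&mut buf)?;
            Ok(VhostUserMsgHeader {
                request: u32::from_ne_bytes(buf[0..4].try_into().unwrap()),
                flags: u32::from_ne_bytes(buf[4..8].try_into().unwrap()),
                size: u32::from_ne_bytes(buf[8..12].try_into().unwrap()),
            })
        }
    }
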
I’m trying to enable support of vhost-user in firecracker, but there are still some enhancements to firecracker needed. So I plan to do it in steps: 1) enhance firecracker to support memfd/hugetlbfs based guest memory. Please refer to https://github.com/firecracker-microvm/firecracker/pull/914 for detail. 2) implement a simple net or blk driver to verify vhost-user crate basic functionality. 3) use dpdk-ovs as net backend. Once it works, it should work with most dpdk/spdk based backend, and the vhost-user crate should be mature enough then. Any suggestions on cooperation are welcomed! Thanks, Gerry > > thanks! > > > > -- > Marc-André Lureau -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Tue Feb 5 19:39:48 2019 From: claire at openstack.org (Claire Massey) Date: Tue, 5 Feb 2019 13:39:48 -0600 Subject: [Rust-VMM] Schedule Bi-Weekly Calls In-Reply-To: References: <8DC3B91E-4CF3-40BC-92FD-C4E7B302794A@openstack.org> Message-ID: <84FBBC7A-AEA0-4427-BB68-CEBD45B8EA7C@openstack.org> Hi all, Reminder - the bi-weekly Rust-VMM calls start tomorrow, February 6, at 8:00am PT. Please add topics to the agenda: https://etherpad.openstack.org/p/rust_vmm_2019_biweekly_calls We’ll use this Zoom for the call: https://zoom.us/j/181523033 Thanks, Claire > On Jan 29, 2019, at 9:49 AM, Claire Massey wrote: > > Hi everyone, > > Thanks for taking the poll. > > Wednesday at 8:00am pacific is the time that works best for everyone. We’ll start the bi-weekly calls next week on February 6. > > I’ll send a meeting invite to the ML, but here’s the info for the call. Please feel free to go ahead and add topics to the agenda. > > Notes / Agenda: https://etherpad.openstack.org/p/rust_vmm_2019_biweekly_calls > > Zoom: https://zoom.us/j/181523033 > > Thanks, > Claire > > >> On Jan 28, 2019, at 10:53 AM, Claire Massey > wrote: >> >> Hi everyone, >> >> There’s some interest in holding bi-weekly calls to maximize opportunity for collaboration around Rust-VMM topics and discuss project activities that are in flight. >> >> Please take this poll to find a time that works best for everyone: https://framadate.org/1corf6IQFWCGZrmT >> >> We plan to start hosting the calls next week, in February. >> >> Thanks, >> Claire >> >> >> >> _______________________________________________ >> Rust-vmm mailing list >> Rust-vmm at lists.opendev.org >> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Thu Feb 7 12:28:55 2019 From: fandree at amazon.com (Florescu, Andreea) Date: Thu, 7 Feb 2019 12:28:55 +0000 Subject: [Rust-VMM] kvm ioctls crate - meeting follow-up Message-ID: <1549542534346.33977@amazon.com> Hey, As I said in the meeting yesterday, I start working on a kvm ioctls crate. But while working on this I noticed that I can't just merge the functionality of the two kvm crates in CrosVM and Firecracker because I first need to adjust the code a bit to make it more generic. Some examples of things that need to be changed in order to have one crate that accommodates both projects: - in the Kvm constructor we should pass the open flags as parameters. In CrosVM the KVM fd is open with close on exec (O_CLOEXEC). In Firecracker we can't use that flag because when Firecracker is started using the jailer, the jailer is responsible for opening /dev/kvm. The file descriptor is passed on to the firecracker process. 
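
To make that first point concrete, here is a rough sketch of a flag-parameterized constructor (it uses the libc crate; the Kvm type and the new_with_flags name are placeholders, not a proposal for the final crate API):

    use std::fs::File;
    use std::io;
    use std::os::unix::io::FromRawFd;

    pub struct Kvm {
        kvm: File,
    }

    impl Kvm {
        /// Open /dev/kvm with caller-provided open(2) flags, so a crosvm-style
        /// caller can pass O_CLOEXEC while a jailer-based caller can omit it.
        pub fn new_with_flags(extra_flags: i32) -> io::Result<Kvm> {
            // O_RDWR is always required for issuing KVM ioctls.
            let fd = unsafe {
                libc::open(
                    b"/dev/kvm\0".as_ptr() as *const libc::c_char,
                    libc::O_RDWR | extra_flags,
                )
            };
            if fd < 0 {
                return Err(io::Error::last_os_error());
            }
            // Safe because we just created this fd and nothing else owns it.
            Ok(Kvm { kvm: unsafe { File::from_raw_fd(fd) } })
        }
    }
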
- in Firecracker we have a hardcoded configuration of the pit in the create_pit method. There are probably some other small nits here and there, but I didn't got the chance to look at all the code. I would also like to use std::io::Error instead of the implementation of raw errors from sys_util/src/errno.rs because the io::Error has a to_string implementation that is mapping the raw os error number to a human readable text. Is there a reason why you decided to go for a custom implementation of io::Error in crosvm? Also an ultra ultra nit: I am making create_vm a method of Kvm instead of having a Vm constructor which has as parameter the Kvm file descriptor. Same for create_vcpu. Andreea Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuj97 at gmail.com Fri Feb 8 10:17:30 2019 From: liuj97 at gmail.com (Liu Jiang) Date: Fri, 8 Feb 2019 18:17:30 +0800 Subject: [Rust-VMM] RFC: Message-ID: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Hi all, As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. So please help to comment on whether this is the right way to go, and next step plan is: 1) import endian.rs from crosvm 2) add address space abstraction for virtual machine Thanks, Gerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Fri Feb 8 11:58:23 2019 From: fandree at amazon.com (Florescu, Andreea) Date: Fri, 8 Feb 2019 11:58:23 +0000 Subject: [Rust-VMM] RFC: In-Reply-To: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Message-ID: <1549627102818.84003@amazon.com> Hey, For future crates I would suggest we do not create repositories directly in the rust-vmm organization and use the review process we discussed a while back: - Create a repository on the personal profile - Request reviews for the repository using the issue template for reviews This process is also described in the community readme: https://github.com/rust-vmm/community Regarding the review, I might be able to look at the changes somewhere after Wednesday next week. Regards, Andreea ________________________________ From: Liu Jiang Sent: Friday, February 8, 2019 12:17 PM To: rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert Subject: [Rust-VMM] RFC: Hi all, As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. So please help to comment on whether this is the right way to go, and next step plan is: 1) import endian.rs from crosvm 2) add address space abstraction for virtual machine Thanks, Gerry Amazon Development Center (Romania) S.R.L. 
registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuj97 at gmail.com Fri Feb 8 12:45:26 2019 From: liuj97 at gmail.com (Liu Jiang) Date: Fri, 8 Feb 2019 20:45:26 +0800 Subject: [Rust-VMM] RFC: In-Reply-To: <1549627102818.84003@amazon.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com> Message-ID: Hey, Sorry for break the process. Should I delete the repository from the rust-vmm project? BTW, there may be some discussion threads when holding the library on private repository, so what’s the best practice to preserve discussions on private repository when migrating from private repository to the rust-vmm project? Thanks, Gerry > On Feb 8, 2019, at 7:58 PM, Florescu, Andreea > wrote: > > Hey, > > For future crates I would suggest we do not create repositories directly in the rust-vmm organization and use the review process we discussed a while back: > - Create a repository on the personal profile > - Request reviews for the repository using the issue template for reviews > > This process is also described in the community readme: https://github.com/rust-vmm/community > > Regarding the review, I might be able to look at the changes somewhere after Wednesday next week. > > Regards, > Andreea > > From: Liu Jiang > > Sent: Friday, February 8, 2019 12:17 PM > To: rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert > Subject: [Rust-VMM] RFC: > > Hi all, > As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . > The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. > So please help to comment on whether this is the right way to go, and next step plan is: > 1) import endian.rs from crosvm > 2) add address space abstraction for virtual machine > Thanks, > Gerry > > Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Fri Feb 8 13:00:35 2019 From: fandree at amazon.com (Florescu, Andreea) Date: Fri, 8 Feb 2019 13:00:35 +0000 Subject: [Rust-VMM] RFC: In-Reply-To: References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com>, Message-ID: <1549630835042.79791@amazon.com> Hey, It's not a big deal. We can review it from rust-vmm since it is already there. I would say not to publish it as a crate yet. When migrating the repository (even if it's private) all discussions should also be migrated by default. Andreea ________________________________ From: Liu Jiang Sent: Friday, February 8, 2019 2:45 PM To: Florescu, Andreea Cc: rust-vmm ML; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert Subject: Re: [Rust-VMM] RFC: Hey, Sorry for break the process. Should I delete the repository from the rust-vmm project? 
BTW, there may be some discussion threads when holding the library on private repository, so what's the best practice to preserve discussions on private repository when migrating from private repository to the rust-vmm project? Thanks, Gerry On Feb 8, 2019, at 7:58 PM, Florescu, Andreea > wrote: Hey, For future crates I would suggest we do not create repositories directly in the rust-vmm organization and use the review process we discussed a while back: - Create a repository on the personal profile - Request reviews for the repository using the issue template for reviews This process is also described in the community readme: https://github.com/rust-vmm/community Regarding the review, I might be able to look at the changes somewhere after Wednesday next week. Regards, Andreea ________________________________ From: Liu Jiang > Sent: Friday, February 8, 2019 12:17 PM To: rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert Subject: [Rust-VMM] RFC: Hi all, As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. So please help to comment on whether this is the right way to go, and next step plan is: 1) import endian.rs from crosvm 2) add address space abstraction for virtual machine Thanks, Gerry Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zachr at google.com Fri Feb 8 18:10:48 2019 From: zachr at google.com (Zach Reizner) Date: Fri, 8 Feb 2019 10:10:48 -0800 Subject: [Rust-VMM] RFC: In-Reply-To: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Message-ID: On Fri, Feb 8, 2019 at 2:18 AM Liu Jiang wrote: > Hi all, > As we have discussed during the meeting, I have created a memory-model > repository under rust-vmm project and posted the initial version at > https://github.com/rust-vmm/memory-model . > The initial version tries to merge current code from the upstream crosvm > and firecracker projects. And the most sensitive user visible change is > changing from u64 to usize for memory related data fields. > On 64-bit arm devices, we usually run a 32-bit userspace with a 64-bit kernel. In this case, the machine word size (usize) that crosvm is compiled with (32-bit) isn't the same as the one the guest kernel, host kernel, hardware is using (64-bit). We used u64 to ensure that the size was always at least as big as needed. 
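
A small self-contained illustration of that point (the GuestAddress newtype here is assumed, not the actual crosvm or firecracker definition): on a 32-bit userspace build, a guest physical address above 4 GiB simply does not fit in usize, which is why the guest-facing field stays u64.

    use std::convert::TryFrom;

    /// Guest physical address kept as u64 so it is wide enough even when the
    /// VMM itself is built for a 32-bit userspace.
    #[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
    pub struct GuestAddress(pub u64);

    fn main() {
        // A guest RAM region that starts above 4 GiB.
        let high_ram = GuestAddress(0x1_0000_0000);

        // On a 32-bit build this conversion fails instead of silently
        // truncating; on a 64-bit build it succeeds.
        match usize::try_from(high_ram.0) {
            Ok(host_sized) => println!("fits in usize: {:#x}", host_sized),
            Err(_) => println!("does not fit in a 32-bit usize"),
        }
    }
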
> So please help to comment on whether this is the right way to go, and next > step plan is: > 1) import endian.rs from crosvm > 2) add address space abstraction for virtual machine > Thanks, > Gerry > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastien.boeuf at intel.com Fri Feb 8 19:27:13 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Fri, 8 Feb 2019 19:27:13 +0000 Subject: [Rust-VMM] RFC: In-Reply-To: <1549627102818.84003@amazon.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com> Message-ID: <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> I agree Andreea, and I would also suggest that once we agreed on the creation of a crate, and once we agreed on the global code coming from the private repo, we still need to create a clean repo under rust-vmm and start from scratch with proper PRs that can be reviewed. Starting with already something in the repo sounds wrong to me. The most intensive code reviews should really happen on the rust-vmm repos. Does that make sense? Thanks, Sebastien On Fri, 2019-02-08 at 11:58 +0000, Florescu, Andreea wrote: Hey, For future crates I would suggest we do not create repositories directly in the rust-vmm organization and use the review process we discussed a while back: - Create a repository on the personal profile - Request reviews for the repository using the issue template for reviews This process is also described in the community readme: https://github.com/rust-vmm/community Regarding the review, I might be able to look at the changes somewhere after Wednesday next week. Regards, Andreea ________________________________ From: Liu Jiang Sent: Friday, February 8, 2019 12:17 PM To: rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert Subject: [Rust-VMM] RFC: Hi all, As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. So please help to comment on whether this is the right way to go, and next step plan is: 1) import endian.rs from crosvm 2) add address space abstraction for virtual machine Thanks, Gerry Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Fri Feb 8 22:28:21 2019 From: fandree at amazon.com (Florescu, Andreea) Date: Fri, 8 Feb 2019 22:28:21 +0000 Subject: [Rust-VMM] RFC: In-Reply-To: <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com>, <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> Message-ID: <1549664900231.47520@amazon.com> Sounds good. My only point in having it as an external repository was to not pollute the rust-vmm with crates that we might decide are not needed. 
But then we can also just remove them if we decide that rust-vmm is not the place for them. Andreea ________________________________ From: Boeuf, Sebastien Sent: Friday, February 8, 2019 9:27 PM To: liuj97 at gmail.com; Ortiz, Samuel; Florescu, Andreea; Iordache, Alexandra; rust-vmm at lists.opendev.org; dgilbert at redhat.com; dgreid at chromium.org Subject: Re: [Rust-VMM] RFC: I agree Andreea, and I would also suggest that once we agreed on the creation of a crate, and once we agreed on the global code coming from the private repo, we still need to create a clean repo under rust-vmm and start from scratch with proper PRs that can be reviewed. Starting with already something in the repo sounds wrong to me. The most intensive code reviews should really happen on the rust-vmm repos. Does that make sense? Thanks, Sebastien On Fri, 2019-02-08 at 11:58 +0000, Florescu, Andreea wrote: Hey, For future crates I would suggest we do not create repositories directly in the rust-vmm organization and use the review process we discussed a while back: - Create a repository on the personal profile - Request reviews for the repository using the issue template for reviews This process is also described in the community readme: https://github.com/rust-vmm/community Regarding the review, I might be able to look at the changes somewhere after Wednesday next week. Regards, Andreea ________________________________ From: Liu Jiang Sent: Friday, February 8, 2019 12:17 PM To: rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert Subject: [Rust-VMM] RFC: Hi all, As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. So please help to comment on whether this is the right way to go, and next step plan is: 1) import endian.rs from crosvm 2) add address space abstraction for virtual machine Thanks, Gerry Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuj97 at gmail.com Sun Feb 10 05:11:45 2019 From: liuj97 at gmail.com (Liu Jiang) Date: Sun, 10 Feb 2019 13:11:45 +0800 Subject: [Rust-VMM] RFC: In-Reply-To: References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Message-ID: On Feb 9, 2019, at 2:10 AM, Zach Reizner > wrote: > > On Fri, Feb 8, 2019 at 2:18 AM Liu Jiang > wrote: > Hi all, > As we have discussed during the meeting, I have created a memory-model repository under rust-vmm project and posted the initial version at https://github.com/rust-vmm/memory-model . > The initial version tries to merge current code from the upstream crosvm and firecracker projects. And the most sensitive user visible change is changing from u64 to usize for memory related data fields. > On 64-bit arm devices, we usually run a 32-bit userspace with a 64-bit kernel. 
In this case, the machine word size (usize) that crosvm is compiled with (32-bit) isn't the same as the one the guest kernel, host kernel, hardware is using (64-bit). We used u64 to ensure that the size was always at least as big as needed. Hi Zach, Good point. So seems that the AddressSpace abstraction may help to solve this conflict. 1) The AddressSpace represents virtual machine physical address space, which contains memory and MMIO regions. For simplicity, u64 will be used here for both 32-bits and 64-bits virtual machines. And GuestAddress should be u64 too. 2) The GuestMemory represents partial or full mapping of an AddressSpace into current process, so usize should be used here for memory related fields because they are used to save pointer/size in current process. And MemoryMapping should be usize too. What’s your thoughts? Thanks, Gerry > So please help to comment on whether this is the right way to go, and next step plan is: > 1) import endian.rs from crosvm > 2) add address space abstraction for virtual machine > Thanks, > Gerry > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuj97 at gmail.com Sun Feb 10 06:00:22 2019 From: liuj97 at gmail.com (Liu, Jiang) Date: Sun, 10 Feb 2019 14:00:22 +0800 Subject: [Rust-VMM] RFC v2: propose to host memory-model crate under the rust-vm project Message-ID: <6E8B2E46-0D98-454B-89C0-B3EF99F8098B@linux.alibaba.com> Hi all, I have posted the first version of memory-model crate under rust-vmm/memory-model, which breaks the repository inclusion process of the rust-vmm community. So I have created a personal GitHub repository (https://github.com/jiangliu/memory-model) for v2 and the rust-vmm/memory-model repository will be deleted soon. Sorry for the inconvenience! The main change from v1 to v2 is the introduction of the AddressSpace abstraction, which is used to present the physical address space of a virtual machine. An AddressSpace object contains guest memory(RAM) regions and MMIO regions for devices. There are two possible ways to make use of the memory-model crate: 1) Use the GuestMemory to represent a virtual machine address space, as it’s used currently by the firecracker and crosvm project. 2) Use the AddressSpace to represent a virtual machine address space, and build GuestMemory objects from the AddressSpace object on demand. So different permission and protection mechanisms may be applied to different regions in guest address space. For example we may protect guest kernel code region with advanced EPT permission flags. It may help to mitigate the security concerns mentioned on the last meeting. On the other hand, the memory-model crate needs to satisfy requirements from both crosvm and firecracker, and currently the most sensitive conflict is that crosvm uses u64 for memory related fields but firecracker uses usize instead. As the valid usage case Zack has mentioned: "On 64-bit arm devices, we usually run a 32-bit userspace with a 64-bit kernel. In this case, the machine word size (usize) that crosvm is compiled with (32-bit) isn't the same as the one the guest kernel, host kernel, hardware is using (64-bit). We used u64 to ensure that the size was always at least as big as needed. “ So we can’t simply replace u64 with usize. 
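
One way to keep both constraints, sketched below with assumed type names, is to leave guest-facing addresses as u64, keep host-side mapping fields as usize, and do a checked conversion only at the point where a guest address is resolved into an offset within a specific host mapping; this is essentially what the proposal below spells out.

    use std::convert::TryFrom;

    /// Guest physical address: always 64-bit, independent of the VMM word size.
    #[derive(Copy, Clone, Debug)]
    pub struct GuestAddress(pub u64);

    /// One mmap-ed region backing part of the guest address space.
    /// Host-side fields use usize because they index the VMM's own memory.
    pub struct MemoryRegion {
        guest_base: GuestAddress,
        size: usize,
        host_addr: *mut u8, // start of the host mapping for this region
    }

    impl MemoryRegion {
        /// Convert a guest address into an offset inside this region's host
        /// mapping, failing cleanly if it does not fit in the host's usize.
        pub fn to_host_offset(&self, addr: GuestAddress) -> Option<usize> {
            let delta = addr.0.checked_sub(self.guest_base.0)?;
            let offset = usize::try_from(delta).ok()?;
            if offset < self.size { Some(offset) } else { None }
        }
    }
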
With the introduction of the AddressSpace abstraction, the proposal to solve this conflict is: 1) Use u64 for all fields related to virtual machine physical address space. Most fields of the AddressSpace and GuestAddress structure falls into this category. 2) Use usize for all fields representing address/size in current process(VMM). Most fields of the MemoryMapping and GuestMemory structure falls into this category. If the proposal is the right way to go, I will posted a v3 with the proposed solution. Thanks, Gerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbonzini at redhat.com Mon Feb 11 11:05:39 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Mon, 11 Feb 2019 12:05:39 +0100 Subject: [Rust-VMM] RFC v2: propose to host memory-model crate under the rust-vm project In-Reply-To: <6E8B2E46-0D98-454B-89C0-B3EF99F8098B@linux.alibaba.com> References: <6E8B2E46-0D98-454B-89C0-B3EF99F8098B@linux.alibaba.com> Message-ID: <383db9b4-9113-9ed3-5d53-b68d877b06bc@redhat.com> On 10/02/19 07:00, Liu, Jiang wrote: > Hi all, > I have posted the first version of memory-model crate under > rust-vmm/memory-model, which breaks the repository inclusion process of > the rust-vmm community. So I have created a personal GitHub repository > (https://github.com/jiangliu/memory-model) for v2 and the > rust-vmm/memory-model repository will be deleted soon. Sorry for the > inconvenience! A memory model crate is certainly a very good starting point for rust-vmm, but it shouldn't include the implementation of the backend. Instead, for rust-vmm we should focus on defining common traits that can be used by any VMM. In this light, GuestMemory is composed of two parts: - a way to convert a GuestAddress to a MemoryMapping and an offset, which can be a very simple MemoryMap trait: pub trait MemoryMap { fn do_in_region(&self, guest_addr: GuestAddress, size: usize, cb: F) -> Result where F: FnOnce(&MemoryMapping, usize) -> Result; fn do_in_region_partial(&self, guest_addr: GuestAddress, cb: F) -> Result where F: FnOnce(&MemoryMapping, usize) -> Result; } This can be implemented with linear lookup as is currently the case in firecracker, or it could use a binary search or a radix tree. rust-vmm shouldn't care. - the convenience API to access memory as slices/streams/objects. This part of the API is shared by MemoryMapping and GuestMemory: // From MemoryMapping pub fn read_to_memory(&self, mem_offset: usize, src: &mut F, count: usize) -> Result<()> where F: Read; // From GuestMemory pub fn read_to_memory(&self, guest_addr: GuestAddress, src: &mut F, count: usize) -> Result<()> where F: Read; sometimes with different names: // From MemoryMapping pub fn write_slice(&self, buf: &[u8], offset: usize) -> Result; pub fn read_obj(&self, offset: usize) -> Result; // From GuestMemory pub fn write_slice_at_addr(&self, buf: &[u8], guest_addr: GuestAddress) -> Result; pub fn read_obj_from_addr(&self, guest_addr: GuestAddress) -> Result; and should be a separate trait. For example if we call it Bytes, MemoryMapping would implement Bytes for MemoryMapping and GuestMemory would implement Bytes: // O for offset pub trait Bytes { type Error; fn read_to_memory(&self, offset: O, src: &mut F, count: usize) -> Result<(), Self::Error> where F: Read; fn read_obj(&self, offset: O) -> Result; ... fn read_slice(&self, buf: &[u8], mem_offset: O) -> Result; .. 
} endian.rs should be part of this crate too, so that you can write let x: LE = mem.read_obj(ofs); AddressSpace is also too specialized and I would leave it out completely from the time being, while GuestMemory and MemoryMapping could be provided in a separate crate ("rust-vmm-examples"?) as a reference implementation of the traits. No objections from me of course on other parts of the crate, for example VolatileMemory or DataInit. Thanks, From pbonzini at redhat.com Mon Feb 11 11:15:08 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Mon, 11 Feb 2019 12:15:08 +0100 Subject: [Rust-VMM] RFC: In-Reply-To: <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com> <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> Message-ID: On 08/02/19 20:27, Boeuf, Sebastien wrote: > I agree Andreea, and I would also suggest that once we agreed on the > creation of a crate, and once we agreed on the global code coming from > the private repo, we still need to create a clean repo under rust-vmm > and start from scratch with proper PRs that can be reviewed. > > Starting with already something in the repo sounds wrong to me. The most > intensive code reviews should really happen on the rust-vmm repos. > > Does that make sense? Yes, however there is a chicken-and-egg problem for the very first commit, where you would have a review for a repository that doesn't exist yet. I don't really have a solution for that (I'm an old-fashioned fan of email-based workflows :)), but using an issue on rust-vmm/community is at least a workaround; reviews can still be performed on the commits in the personal repository. In the meanwhile, should rust-vmm/memory-model be deleted? Paolo From fandree at amazon.com Mon Feb 11 12:38:57 2019 From: fandree at amazon.com (Florescu, Andreea) Date: Mon, 11 Feb 2019 12:38:57 +0000 Subject: [Rust-VMM] RFC v2: propose to host memory-model crate under the rust-vm project In-Reply-To: <383db9b4-9113-9ed3-5d53-b68d877b06bc@redhat.com> References: <6E8B2E46-0D98-454B-89C0-B3EF99F8098B@linux.alibaba.com>, <383db9b4-9113-9ed3-5d53-b68d877b06bc@redhat.com> Message-ID: <1549888736531.45338@amazon.com> Let's move this discussion to GitHub for easy tracking: https://github.com/rust-vmm/community/issues/16 Paolo, can you paste what you said on the issue? Andreea ________________________________________ From: Paolo Bonzini Sent: Monday, February 11, 2019 1:05 PM To: Liu, Jiang; rust-vmm ML; Florescu, Andreea; Samuel Ortiz; Boeuf, Sebastien; Iordache, Alexandra; Dylan Reid; Dr. David Alan Gilbert; zachr at google.com Subject: Re: [Rust-VMM] RFC v2: propose to host memory-model crate under the rust-vm project On 10/02/19 07:00, Liu, Jiang wrote: > Hi all, > I have posted the first version of memory-model crate under > rust-vmm/memory-model, which breaks the repository inclusion process of > the rust-vmm community. So I have created a personal GitHub repository > (https://github.com/jiangliu/memory-model) for v2 and the > rust-vmm/memory-model repository will be deleted soon. Sorry for the > inconvenience! A memory model crate is certainly a very good starting point for rust-vmm, but it shouldn't include the implementation of the backend. Instead, for rust-vmm we should focus on defining common traits that can be used by any VMM. 
In this light, GuestMemory is composed of two parts: - a way to convert a GuestAddress to a MemoryMapping and an offset, which can be a very simple MemoryMap trait: pub trait MemoryMap { fn do_in_region(&self, guest_addr: GuestAddress, size: usize, cb: F) -> Result where F: FnOnce(&MemoryMapping, usize) -> Result; fn do_in_region_partial(&self, guest_addr: GuestAddress, cb: F) -> Result where F: FnOnce(&MemoryMapping, usize) -> Result; } This can be implemented with linear lookup as is currently the case in firecracker, or it could use a binary search or a radix tree. rust-vmm shouldn't care. - the convenience API to access memory as slices/streams/objects. This part of the API is shared by MemoryMapping and GuestMemory: // From MemoryMapping pub fn read_to_memory(&self, mem_offset: usize, src: &mut F, count: usize) -> Result<()> where F: Read; // From GuestMemory pub fn read_to_memory(&self, guest_addr: GuestAddress, src: &mut F, count: usize) -> Result<()> where F: Read; sometimes with different names: // From MemoryMapping pub fn write_slice(&self, buf: &[u8], offset: usize) -> Result; pub fn read_obj(&self, offset: usize) -> Result; // From GuestMemory pub fn write_slice_at_addr(&self, buf: &[u8], guest_addr: GuestAddress) -> Result; pub fn read_obj_from_addr(&self, guest_addr: GuestAddress) -> Result; and should be a separate trait. For example if we call it Bytes, MemoryMapping would implement Bytes for MemoryMapping and GuestMemory would implement Bytes: // O for offset pub trait Bytes { type Error; fn read_to_memory(&self, offset: O, src: &mut F, count: usize) -> Result<(), Self::Error> where F: Read; fn read_obj(&self, offset: O) -> Result; ... fn read_slice(&self, buf: &[u8], mem_offset: O) -> Result; .. } endian.rs should be part of this crate too, so that you can write let x: LE = mem.read_obj(ofs); AddressSpace is also too specialized and I would leave it out completely from the time being, while GuestMemory and MemoryMapping could be provided in a separate crate ("rust-vmm-examples"?) as a reference implementation of the traits. No objections from me of course on other parts of the crate, for example VolatileMemory or DataInit. Thanks, Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. From samuel.ortiz at intel.com Mon Feb 11 13:19:49 2019 From: samuel.ortiz at intel.com (Samuel Ortiz) Date: Mon, 11 Feb 2019 14:19:49 +0100 Subject: [Rust-VMM] RFC: In-Reply-To: References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> <1549627102818.84003@amazon.com> <820678482317472be656cb475b8b1a1eecf5a2ee.camel@intel.com> Message-ID: <20190211131949.GC4604@caravaggio> On Mon, Feb 11, 2019 at 12:15:08PM +0100, Paolo Bonzini wrote: > On 08/02/19 20:27, Boeuf, Sebastien wrote: > > I agree Andreea, and I would also suggest that once we agreed on the > > creation of a crate, and once we agreed on the global code coming from > > the private repo, we still need to create a clean repo under rust-vmm > > and start from scratch with proper PRs that can be reviewed. > > > > Starting with already something in the repo sounds wrong to me. The most > > intensive code reviews should really happen on the rust-vmm repos. > > > > Does that make sense? > > Yes, however there is a chicken-and-egg problem for the very first > commit, where you would have a review for a repository that doesn't > exist yet. 
I don't really have a solution for that (I'm an > old-fashioned fan of email-based workflows :)), but using an issue on > rust-vmm/community is at least a workaround; reviews can still be > performed on the commits in the personal repository. Maybe we could define a very light process where someone willing to add a new crate for rust-vmm should first send an issue describing the crate and the reason why it should be part of rust-vmm. If it makes sense for the community to have this crate being part of rust-vmm, then a completely empty repository is created and people can start sending actual PRs against it. I'm going to send a proposal for this, as a README addition to the community repo. > In the meanwhile, should rust-vmm/memory-model be deleted? It's deleted now. Cheers, Samuel. --------------------------------------------------------------------- Intel Corporation SAS (French simplified joint stock company) Registered headquarters: "Les Montalets"- 2, rue de Paris, 92196 Meudon Cedex, France Registration Number: 302 456 199 R.C.S. NANTERRE Capital: 4,572,000 Euros This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From zachr at google.com Mon Feb 11 18:32:29 2019 From: zachr at google.com (Zach Reizner) Date: Mon, 11 Feb 2019 10:32:29 -0800 Subject: [Rust-VMM] RFC: In-Reply-To: References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Message-ID: On Sat, Feb 9, 2019 at 9:11 PM Liu Jiang wrote: > On Feb 9, 2019, at 2:10 AM, Zach Reizner wrote: > > > On Fri, Feb 8, 2019 at 2:18 AM Liu Jiang wrote: > >> Hi all, >> As we have discussed during the meeting, I have created a memory-model >> repository under rust-vmm project and posted the initial version at >> https://github.com/rust-vmm/memory-model . >> The initial version tries to merge current code from the upstream crosvm >> and firecracker projects. And the most sensitive user visible change is >> changing from u64 to usize for memory related data fields. >> > On 64-bit arm devices, we usually run a 32-bit userspace with a 64-bit > kernel. In this case, the machine word size (usize) that crosvm is compiled > with (32-bit) isn't the same as the one the guest kernel, host kernel, > hardware is using (64-bit). We used u64 to ensure that the size was always > at least as big as needed. > > Hi Zach, > Good point. So seems that the AddressSpace abstraction may help to solve > this conflict. > 1) The AddressSpace represents virtual machine physical address space, > which contains memory and MMIO regions. For simplicity, u64 will be used > here for both 32-bits and 64-bits virtual machines. And GuestAddress should > be u64 too. > 2) The GuestMemory represents partial or full mapping of an AddressSpace > into current process, so usize should be used here for memory related > fields because they are used to save pointer/size in current process. And > MemoryMapping should be usize too. > What’s your thoughts? > That seems like a good solution. As long as GuestAddress can be used with GuestMemory methods automatically, independent of the compiled word size, then this will be suitable. 
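
To make the access trait proposed earlier in this thread concrete with the generic address parameter written out (the exact names and signatures here are assumptions, not an agreed API), a single trait parameterized over the address type lets guest memory implement it for a 64-bit GuestAddress while an individual mapped region implements it for usize:

    use std::io::{Read, Write};

    /// One possible shape of the Bytes trait: generic over the address type.
    pub trait Bytes<A> {
        type Error;

        fn write_slice(&self, buf: &[u8], addr: A) -> Result<usize, Self::Error>;
        fn read_slice(&self, buf: &mut [u8], addr: A) -> Result<usize, Self::Error>;

        fn read_from<F: Read>(&self, addr: A, src: &mut F, count: usize)
            -> Result<(), Self::Error>;
        fn write_to<F: Write>(&self, addr: A, dst: &mut F, count: usize)
            -> Result<(), Self::Error>;
    }

    // A VMM would then take `impl Bytes<GuestAddress>` for whole-guest
    // accesses and `impl Bytes<usize>` for accesses within a single region.
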
> Thanks, > Gerry > > So please help to comment on whether this is the right way to go, and next >> step plan is: >> 1) import endian.rs from crosvm >> 2) add address space abstraction for virtual machine >> Thanks, >> Gerry >> _______________________________________________ >> Rust-vmm mailing list >> Rust-vmm at lists.opendev.org >> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbonzini at redhat.com Mon Feb 11 19:01:27 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Mon, 11 Feb 2019 20:01:27 +0100 Subject: [Rust-VMM] RFC: In-Reply-To: References: <5218258E-EB7A-4F74-85E0-8F43AF2A9012@linux.alibaba.com> Message-ID: On 11/02/19 19:32, Zach Reizner wrote: > 2) The GuestMemory represents partial or full mapping of an > AddressSpace into current process, so usize should be used here for > memory related fields because they are used to save pointer/size in > current process. And MemoryMapping should be usize too. > What’s your thoughts? > > That seems like a good solution. As long as GuestAddress can be used > with GuestMemory methods automatically, independent of the compiled word > size, then this will be suitable.  Yes, also in the proposal I placed in the rust-vmm/community issue you have guest memory as an "impl Bytes" (which wraps 64-bit offsets), while individual memory regions are "impl Bytes" and could be mmap-ed regions, u8 slices or whatever. Paolo From timo at crowdstrike.com Tue Feb 12 16:19:41 2019 From: timo at crowdstrike.com (Timo Kreuzer) Date: Tue, 12 Feb 2019 16:19:41 +0000 Subject: [Rust-VMM] Insula project and Hyper-v support Message-ID: Hi everyone, I am working on Insula (https://github.com/insula-rs), more specifically on the Windows loader. The Insula project was started as lightweight VM / container environment for KVM/Hyper-V written in Rust. Some of the goals are: * An abstraction layer to provide a platform independent interface for Linux/Windows / KVM/Hyper-V * Support for Linux and Windows guest loaders * Interface for hooks/callbacks: * security related host-side hooks for e.g. the detection of ring 0 exploits & security bypasses from the host. * optional generalized guest<->host communication interface, e.g. for a kernel-debugger transport layer or security software notifications * optional host-side debug hooks to provide the ability to debug the VM at a low level Considering that firecracker already provide a lot of functionality, we have been looking into consolidating the code base, potentially joining a collaboration with the existing projects. Since one of our goals is host platform independence, we would need Windows / WHV specific implementations / wrappers. Looking at the firecracker code, I found that kvm specific code is used here and there, but it doesn't seem too bad and could probably be modified to separate the kvm specific code from generic code. This is a first RFC to see what you guys think about these things, whether you would be interested in the mentioned pieces and any feedback you might have on that topic. Thanks, Timo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From claire at openstack.org Thu Feb 14 17:10:15 2019 From: claire at openstack.org (Claire Massey) Date: Thu, 14 Feb 2019 11:10:15 -0600 Subject: [Rust-VMM] PTG & Open Infrastructure Summit in Denver, April 29 - May 4 Message-ID: Hi everyone, Per our discussion on last week’s call, I went ahead and earmarked space for the Rust-VMM community to come collaborate face-to-face in the open during the Project Teams Gathering (PTG) event that’s co-located with the Open Infrastructure Summit in Denver, April 29 - May 4. The main conference portion of the Summit takes place Mon-Weds and the PTG developer working sessions run Thurs-Sat. I’ve requested a dedicated work room for Rust-VMM to have for two full days on Thurs/Fri. This is a great opportunity to get some focused hack time together in person and also to collaborate with adjacent open source communities. There’s also a lot of great content happening at the beginning of the week during the Summit - likely including a presentation or two about Rust-VMM. In the coming weeks we’ll need to confirm if enough people are able to attend the PTG to make use of the space. I really hope we can make this work! The Kata Containers community will already be there in full force and we’d love to get the rest of you from Firecracker, Crosvm, QEMU and others to join us there. I’ll follow up about it on the next call. Let me know if you have any questions. Thanks, Claire From sebastien.boeuf at intel.com Fri Feb 15 17:33:17 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Fri, 15 Feb 2019 17:33:17 +0000 Subject: [Rust-VMM] PTG & Open Infrastructure Summit in Denver, April 29 - May 4 In-Reply-To: References: Message-ID: <1E91073893EF8F498411079ED374F91245F7A227@ORSMSX115.amr.corp.intel.com> Hi Claire, Maybe we could create a document (what about an etherpad :)) to list the persons who want to attend the PTG? Thanks, Sebastien ________________________________________ From: Claire Massey [claire at openstack.org] Sent: Thursday, February 14, 2019 9:10 AM To: rust-vmm at lists.opendev.org Subject: [Rust-VMM] PTG & Open Infrastructure Summit in Denver, April 29 - May 4 Hi everyone, Per our discussion on last week’s call, I went ahead and earmarked space for the Rust-VMM community to come collaborate face-to-face in the open during the Project Teams Gathering (PTG) event that’s co-located with the Open Infrastructure Summit in Denver, April 29 - May 4. The main conference portion of the Summit takes place Mon-Weds and the PTG developer working sessions run Thurs-Sat. I’ve requested a dedicated work room for Rust-VMM to have for two full days on Thurs/Fri. This is a great opportunity to get some focused hack time together in person and also to collaborate with adjacent open source communities. There’s also a lot of great content happening at the beginning of the week during the Summit - likely including a presentation or two about Rust-VMM. In the coming weeks we’ll need to confirm if enough people are able to attend the PTG to make use of the space. I really hope we can make this work! The Kata Containers community will already be there in full force and we’d love to get the rest of you from Firecracker, Crosvm, QEMU and others to join us there. I’ll follow up about it on the next call. Let me know if you have any questions. 
Thanks, Claire _______________________________________________ Rust-vmm mailing list Rust-vmm at lists.opendev.org http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From pbonzini at redhat.com Fri Feb 15 17:41:40 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Fri, 15 Feb 2019 18:41:40 +0100 Subject: [Rust-VMM] PTG & Open Infrastructure Summit in Denver, April 29 - May 4 In-Reply-To: References: Message-ID: <088a2229-5bfd-7c65-8a8b-a7b3bcbd079f@redhat.com> On 14/02/19 18:10, Claire Massey wrote: > Hi everyone, > > Per our discussion on last week’s call, I went ahead and earmarked > space for the Rust-VMM community to come collaborate face-to-face in > the open during the Project Teams Gathering (PTG) event that’s > co-located with the Open Infrastructure Summit in Denver, April 29 - > May 4. > > The main conference portion of the Summit takes place Mon-Weds and > the PTG developer working sessions run Thurs-Sat. I’ve requested a > dedicated work room for Rust-VMM to have for two full days on > Thurs/Fri. This is a great opportunity to get some focused hack time > together in person and also to collaborate with adjacent open source > communities. There’s also a lot of great content happening at the > beginning of the week during the Summit - likely including a > presentation or two about Rust-VMM. > > In the coming weeks we’ll need to confirm if enough people are able > to attend the PTG to make use of the space. I really hope we can make > this work! The Kata Containers community will already be there in > full force and we’d love to get the rest of you from Firecracker, > Crosvm, QEMU and others to join us there. There should be 3-4 people from Red Hat. Paolo From claire at openstack.org Fri Feb 15 17:48:33 2019 From: claire at openstack.org (Claire Massey) Date: Fri, 15 Feb 2019 11:48:33 -0600 Subject: [Rust-VMM] PTG & Open Infrastructure Summit in Denver, April 29 - May 4 In-Reply-To: <088a2229-5bfd-7c65-8a8b-a7b3bcbd079f@redhat.com> References: <088a2229-5bfd-7c65-8a8b-a7b3bcbd079f@redhat.com> Message-ID: <748B49F1-3C34-499B-9B3D-4F3A448295E8@openstack.org> Great idea, Sebastien! I’ve started this planning pad to capture the list of people who are planning to attend and to start brainstorming topics: https://etherpad.openstack.org/p/rust-vmm-2019-ptg-denver Paola, that’s great news. Please add your team to the list. Thanks, Claire > On Feb 15, 2019, at 11:41 AM, Paolo Bonzini wrote: > > On 14/02/19 18:10, Claire Massey wrote: >> Hi everyone, >> >> Per our discussion on last week’s call, I went ahead and earmarked >> space for the Rust-VMM community to come collaborate face-to-face in >> the open during the Project Teams Gathering (PTG) event that’s >> co-located with the Open Infrastructure Summit in Denver, April 29 - >> May 4. >> >> The main conference portion of the Summit takes place Mon-Weds and >> the PTG developer working sessions run Thurs-Sat. I’ve requested a >> dedicated work room for Rust-VMM to have for two full days on >> Thurs/Fri. This is a great opportunity to get some focused hack time >> together in person and also to collaborate with adjacent open source >> communities. There’s also a lot of great content happening at the >> beginning of the week during the Summit - likely including a >> presentation or two about Rust-VMM. >> >> In the coming weeks we’ll need to confirm if enough people are able >> to attend the PTG to make use of the space. I really hope we can make >> this work! 
The Kata Containers community will already be there in >> full force and we’d love to get the rest of you from Firecracker, >> Crosvm, QEMU and others to join us there. > > There should be 3-4 people from Red Hat. > > Paolo > > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at openstack.org Wed Feb 20 20:25:56 2019 From: jonathan at openstack.org (Jonathan Bryce) Date: Wed, 20 Feb 2019 14:25:56 -0600 Subject: [Rust-VMM] Proposal for contribution and crate approval process Message-ID: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> Hi everyone, On the rust-vmm community meeting this morning there was a discussion about the approval process for new crates. From the discussion a basic proposal emerged: - Create a group of rust-vmm project-wide maintainers - Group size would start out with around 5 individuals - Maintainers should come from a variety of backgrounds and affiliations - Inclusion of a new crate would require approval from at least 3 maintainers - Maintainers should look for approval and feedback from multiple “consumer" communities (e.g. qemu, crosvm, kata, firecracker) - As the number of crates scale, maintenance at the crate level would be distributed beyond the project-wide group to avoid overloading the project-wide maintainers or creating bottlenecks within individual crates I offered to write this up and post on the list to make sure everyone had a chance to see and comment on it, so please send your thoughts/feedback. If this is agreeable as a process, we’ll need to bootstrap the initial set of maintainers. I have thoughts on that as well, but would love to hear others’ opinions too. Thanks, Jonathan From sebastien.boeuf at intel.com Mon Feb 25 18:53:18 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Mon, 25 Feb 2019 18:53:18 +0000 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com>, Message-ID: <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> Reviving this thread! As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. This is really great to see this happening, but I would like to go even one step further, and be able to support the equivalent of irqchip=off. This use case means that KVM is not performing any emulation and that everything is left off to the userspace process. This would allow for running legacy free hypervisor, where IRQs would be always supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT. The current blocker is the fact that MSI is tightly coupled with PCI, and there is no current upstream way to retrieve the MSI vectors associated with a device. But if we can find some mechanisms to communicate the MSI vectors chosen by the guest kernel down to the hypervisor about a device, we could definitely get rid of IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate those information with the VMM. 
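
To make the MSI path concrete: once the VMM learns the address/data pair the guest programmed into a device's MSI or MSI-X capability (via ACPI or any other channel), delivering the interrupt needs nothing beyond KVM's MSI signaling ioctl, with no IOAPIC or PIC emulation involved. The struct below mirrors the kernel's kvm_msi layout; the helper name and the example values are illustrative only.

    /// Mirror of the kernel's struct kvm_msi, the payload for KVM_SIGNAL_MSI.
    #[repr(C)]
    #[derive(Debug, Default)]
    pub struct KvmMsi {
        pub address_lo: u32,
        pub address_hi: u32,
        pub data: u32,
        pub flags: u32,
        pub devid: u32,
        pub pad: [u8; 12],
    }

    /// Build the payload from the address/data pair the guest programmed.
    /// Handing this to KVM_SIGNAL_MSI on the VM fd delivers the interrupt
    /// straight to the (in-kernel or userspace) LAPIC.
    pub fn msi_payload(address: u64, data: u32) -> KvmMsi {
        KvmMsi {
            address_lo: address as u32,
            address_hi: (address >> 32) as u32,
            data,
            ..Default::default()
        }
    }

    fn main() {
        // 0xfee00000 is the x86 MSI address window; the data value encodes
        // the vector and delivery mode (example values only).
        let msi = msi_payload(0xfee0_0000, 0x41);
        println!("{:?}", msi);
    }
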
What do you all think about this? Is there anything I missed that makes this proposal not feasible? Thanks, Sebastien ________________________________ From: Dylan Reid [dgreid at google.com] Sent: Thursday, December 20, 2018 11:55 AM To: Paolo Bonzini; Miriam Zimmerman Cc: rust-vmm at lists.opendev.org Subject: Re: [Rust-VMM] [Rust-vmm] Goals for this list On Thu, Dec 20, 2018, 7:34 AM Paolo Bonzini wrote: On 20/12/18 16:05, Anthony Liguori wrote: > The two biggest sources of CVEs in KVM have been instruction emulation > and device emulation. Moving the x86_emulate code to userspace and > rewritting it in Rust would eliminate one of the larger attack surfaces > in KVM and likewise, moving IO APIC and PIT emulation to userspace would > help a lot there too. > > On modern processors, LAPIC is handled almost entirely in hardware so > the remaining complexity in KVM is really around EPT handling and > hardware interaction. I don't think either can reasonably be moved. Note that userspace PIT/PIC/IOAPIC emulation is already supported by KVM (Linux 4.4 or newer I think; QEMU will make it the default for the q35 machine type in the next release, for now you need -machine kernel_irqchip=split). + Miriam who is working on pit and apic on crosvm Paolo _______________________________________________ Rust-vmm mailing list Rust-vmm at lists.opendev.org http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -------------- next part -------------- An HTML attachment was scrubbed... URL: From mutexlox at google.com Mon Feb 25 23:32:35 2019 From: mutexlox at google.com (Miriam Zimmerman) Date: Mon, 25 Feb 2019 15:32:35 -0800 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com> <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> Message-ID: On Mon, Feb 25, 2019 at 10:53 AM Boeuf, Sebastien wrote: > > Reviving this thread! > > As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. This is really great to see this happening, but I would like to go even one step further, and be able to support the equivalent of irqchip=off. Thanks for your interest! It's great to know that the work I'm doing will be more broadly useful! :-) > This use case means that KVM is not performing any emulation and that everything is left off to the userspace process. This would allow for running legacy free hypervisor, where IRQs would be always supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT. I believe that the Google Cloud folks tried using a userspace LAPIC, but when they benchmarked, the performance was unacceptably bad. (since LAPICs are used much more frequently than IOAPICs/PICs/PITs) I can't watch it right now to confirm, but I believe the "Performant Security Hardening of KVM" from KVM forum 2016 (http://www.linux-kvm.org/page/KVM_Forum_2016) goes into some more detail on this. > The current blocker is the fact that MSI is tightly coupled with PCI, and there is no current upstream way to retrieve the MSI vectors associated with a device. 
But if we can find some mechanisms to communicate the MSI vectors chosen by the guest kernel down to the hypervisor about a device, we could definitely get rid of IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate those information with the VMM. > > What do you all think about this? Is there anything I missed that makes this proposal not feasible? > > Thanks, > Sebastien Miriam From sebastien.boeuf at intel.com Tue Feb 26 00:48:49 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Tue, 26 Feb 2019 00:48:49 +0000 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com> <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> Message-ID: On Mon, 2019-02-25 at 15:32 -0800, Miriam Zimmerman wrote: > On Mon, Feb 25, 2019 at 10:53 AM Boeuf, Sebastien > wrote: > > > > Reviving this thread! > > > > As Miriam mentioned, there's some ongoing work to support IOAPIC, > > PIC and PIT emulation being performed in userspace ( > > https://bugs.chromium.org/p/chromium/issues/detail?id=908689), > > which is equivalent to irqchip=split. This is really great to see > > this happening, but I would like to go even one step further, and > > be able to support the equivalent of irqchip=off. > > Thanks for your interest! It's great to know that the work I'm doing > will be more broadly useful! :-) > > > This use case means that KVM is not performing any emulation and > > that everything is left off to the userspace process. This would > > allow for running legacy free hypervisor, where IRQs would be > > always supported through MSI/MSI-X, hence using only the LAPIC. For > > this, we would need full LAPIC emulation to be designed in > > userspace, with no need for any IOAPIC/PIC/PIT. > > I believe that the Google Cloud folks tried using a userspace LAPIC, > but when they benchmarked, the performance was unacceptably bad. > (since LAPICs are used much more frequently than IOAPICs/PICs/PITs) > > I can't watch it right now to confirm, but I believe the "Performant > Security Hardening of KVM" from KVM forum 2016 > (http://www.linux-kvm.org/page/KVM_Forum_2016) goes into some more > detail on this. Yes, Steve's presentation summarizes it well. Gerry, I think you mentioned some work you did using NetBSD and uKVM regarding the LAPIC emulation being done in userspace, right? How were the performances? Do you have some pointers to the code? Thanks, Sebastien > > > The current blocker is the fact that MSI is tightly coupled with > > PCI, and there is no current upstream way to retrieve the MSI > > vectors associated with a device. But if we can find some > > mechanisms to communicate the MSI vectors chosen by the guest > > kernel down to the hypervisor about a device, we could definitely > > get rid of IOAPIC, hence reaching the end goal I'm talking about > > here. Just note that ACPI would be a good way for the guest to > > communicate those information with the VMM. > > > > What do you all think about this? Is there anything I missed that > > makes this proposal not feasible? > > > > Thanks, > > Sebastien > > Miriam From dgilbert at redhat.com Tue Feb 26 09:09:48 2019 From: dgilbert at redhat.com (Dr. 
David Alan Gilbert) Date: Tue, 26 Feb 2019 09:09:48 +0000 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com> <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> Message-ID: <20190226090947.GA2721@work-vm> * Boeuf, Sebastien (sebastien.boeuf at intel.com) wrote: > Reviving this thread! > > As Miriam mentioned, there's some ongoing work to support IOAPIC, PIC and PIT emulation being performed in userspace (https://bugs.chromium.org/p/chromium/issues/detail?id=908689), which is equivalent to irqchip=split. This is really great to see this happening, but I would like to go even one step further, and be able to support the equivalent of irqchip=off. > > This use case means that KVM is not performing any emulation and that everything is left off to the userspace process. This would allow for running legacy free hypervisor, where IRQs would be always supported through MSI/MSI-X, hence using only the LAPIC. For this, we would need full LAPIC emulation to be designed in userspace, with no need for any IOAPIC/PIC/PIT. > > The current blocker is the fact that MSI is tightly coupled with PCI, and there is no current upstream way to retrieve the MSI vectors associated with a device. But if we can find some mechanisms to communicate the MSI vectors chosen by the guest kernel down to the hypervisor about a device, we could definitely get rid of IOAPIC, hence reaching the end goal I'm talking about here. Just note that ACPI would be a good way for the guest to communicate those information with the VMM. Would a new ACPI mechanism really be any easy than some really basic PCI? You don't really need to provide a true PCI hierarchy or anything. Dave > What do you all think about this? Is there anything I missed that makes this proposal not feasible? > > Thanks, > Sebastien > ________________________________ > From: Dylan Reid [dgreid at google.com] > Sent: Thursday, December 20, 2018 11:55 AM > To: Paolo Bonzini; Miriam Zimmerman > Cc: rust-vmm at lists.opendev.org > Subject: Re: [Rust-VMM] [Rust-vmm] Goals for this list > > > > On Thu, Dec 20, 2018, 7:34 AM Paolo Bonzini wrote: > On 20/12/18 16:05, Anthony Liguori wrote: > > The two biggest sources of CVEs in KVM have been instruction emulation > > and device emulation. Moving the x86_emulate code to userspace and > > rewritting it in Rust would eliminate one of the larger attack surfaces > > in KVM and likewise, moving IO APIC and PIT emulation to userspace would > > help a lot there too. > > > > On modern processors, LAPIC is handled almost entirely in hardware so > > the remaining complexity in KVM is really around EPT handling and > > hardware interaction. I don't think either can reasonably be moved. > > Note that userspace PIT/PIC/IOAPIC emulation is already supported by KVM > (Linux 4.4 or newer I think; QEMU will make it the default for the q35 > machine type in the next release, for now you need -machine > kernel_irqchip=split). 
> > + Miriam who is working on pit and apic on crosvm > > > Paolo > > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -- Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK From pbonzini at redhat.com Tue Feb 26 11:38:27 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Tue, 26 Feb 2019 12:38:27 +0100 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: <20190226090947.GA2721@work-vm> References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com> <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> <20190226090947.GA2721@work-vm> Message-ID: <7b423e51-b5a2-0911-40ae-133460b8a45c@redhat.com> On 26/02/19 10:09, Dr. David Alan Gilbert wrote: >> But if we can find some mechanisms to communicate the MSI vectors >> chosen by the guest kernel down to the hypervisor about a device, >> we could definitely get rid of IOAPIC, hence reaching the end goal >> I'm talking about here. Just note that ACPI would be a good way for >> the guest to communicate those information with the VMM. > > Would a new ACPI mechanism really be any easy than some really basic > PCI? You don't really need to provide a true PCI hierarchy or > anything. Yeah, I don't see the problem with configuring MSI via the PCI configuration space. Being able to use MSI is just one more reason to drop virtio-mmio and switch to virtio-pci. (Without IOAPIC, the PCI devices would not be able to serve INTX interrupts, but that's not a problem for most modern devices). Paolo From sebastien.boeuf at intel.com Tue Feb 26 14:39:35 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Tue, 26 Feb 2019 14:39:35 +0000 Subject: [Rust-VMM] [Rust-vmm] Goals for this list In-Reply-To: <7b423e51-b5a2-0911-40ae-133460b8a45c@redhat.com> References: <1545089457255.82861@amazon.com> <6acbefa5-dac8-e527-3be7-00df9586b645@lohutok.net> <86f95429-4381-9501-798d-ad56f264a34c@redhat.com> <1E91073893EF8F498411079ED374F91245F8483D@ORSMSX115.amr.corp.intel.com> <20190226090947.GA2721@work-vm> <7b423e51-b5a2-0911-40ae-133460b8a45c@redhat.com> Message-ID: <39e794b8d78bf5736991520cbcfdd04617e325df.camel@intel.com> On Tue, 2019-02-26 at 12:38 +0100, Paolo Bonzini wrote: > On 26/02/19 10:09, Dr. David Alan Gilbert wrote: > > > But if we can find some mechanisms to communicate the MSI vectors > > > chosen by the guest kernel down to the hypervisor about a device, > > > we could definitely get rid of IOAPIC, hence reaching the end > > > goal > > > I'm talking about here. Just note that ACPI would be a good way > > > for > > > the guest to communicate those information with the VMM. > > > > Would a new ACPI mechanism really be any easy than some really > > basic > > PCI? You don't really need to provide a true PCI hierarchy or > > anything. > > Yeah, I don't see the problem with configuring MSI via the PCI > configuration space. Being able to use MSI is just one more reason > to > drop virtio-mmio and switch to virtio-pci. Yes you're right, and if you go all the way with PCI, you solve a lot of issues. But let's say you don't have PCI support, you need a way to notify your guest about things like hotplug, and you do so using a GED or GPE ACPI device. 
In that case, we still want to get rid of the IOAPIC dependency and make sure those ACPI devices would support MSI. > > (Without IOAPIC, the PCI devices would not be able to serve INTX > interrupts, but that's not a problem for most modern devices). Yes but that's not a problem if you consider using only modern devices. > > Paolo From chao.p.peng at intel.com Wed Feb 27 12:49:56 2019 From: chao.p.peng at intel.com (Peng, Chao P) Date: Wed, 27 Feb 2019 12:49:56 +0000 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> Message-ID: Sounds like a good process to start with. Meanwhile my feeling is that we need a high-level project-wide design first. The reason is that the crates are not really standalone projects. They are closely related and most likely they will form a single binary from the user's point of view. They should be well designed from the high level at the very beginning. To me, we'd better design the project from the user point of view: it's a single project as a whole. The division of the code into crates is OK, but that's just an internal detail of the project. Also, every crate should meet some release criteria, otherwise a failure in one crate may result in the failure of the whole project. There are several things we can start with: - Discuss #14(https://github.com/rust-vmm/community/issues/14) to come up with a 'must have' list that will be included in the first release. - Come up with a project-wide high-level design. For example: how to divide the crates how to abstract the common code how crates interact with each other and come up with possible trait definitions what interfaces we want to expose to users... - Discuss how code/doc/test will be organized - Define process/workflow/governance... But even before those, we need to understand and write down our requirements clearly so everybody involved will be on the same page. Just my thoughts. Welcome for further discussion. Thanks, Chao > -----Original Message----- > From: Jonathan Bryce [mailto:jonathan at openstack.org] > Sent: Thursday, February 21, 2019 4:26 AM > To: rust-vmm at lists.opendev.org > Subject: [Rust-VMM] Proposal for contribution and crate approval process > > Hi everyone, > > On the rust-vmm community meeting this morning there was a discussion about the approval process for new crates. From the > discussion a basic proposal emerged: > > - Create a group of rust-vmm project-wide maintainers > - Group size would start out with around 5 individuals > - Maintainers should come from a variety of backgrounds and affiliations > - Inclusion of a new crate would require approval from at least 3 maintainers > - Maintainers should look for approval and feedback from multiple “consumer" communities (e.g. qemu, crosvm, kata, > firecracker) > - As the number of crates scale, maintenance at the crate level would be distributed beyond the project-wide group to avoid > overloading the project-wide maintainers or creating bottlenecks within individual crates > > I offered to write this up and post on the list to make sure everyone had a chance to see and comment on it, so please send your > thoughts/feedback. > > If this is agreeable as a process, we’ll need to bootstrap the initial set of maintainers. I have thoughts on that as well, but would love to > hear others’ opinions too.
> > Thanks, > > Jonathan > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From samuel.ortiz at intel.com Wed Feb 27 13:25:33 2019 From: samuel.ortiz at intel.com (Samuel Ortiz) Date: Wed, 27 Feb 2019 14:25:33 +0100 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> Message-ID: <20190227132533.GQ21421@caravaggio> Hi Chao, On Wed, Feb 27, 2019 at 12:49:56PM +0000, Peng, Chao P wrote: > Sounds like a good process to start with. > > Meanwhile my feeling is that we need a high-level project-wide design first. The reason is that the creates are not really standalone projects. They are closely related and most likely they will form up a single binary from user' point of view. They should be well designed from the high-level at the very beginning. > > To me, we'd better design the project from the user point of view: it's a single project as a whole. The division of the codes into creates is OK, but that's just kind of internal thing for the project. > rust-vmm is a single project and we should be consistent in the way we build and design crates that are part of the project. But I think we should be careful about making sure those crates can be used independently, as much as possible. We don't want to produce one single VMM out of the rust-vmm crates, we want rust-vmm users to be able to build custom and configurable VMMs out of them. We may provide a generic VMM as an example on how to use those crates, but for now I don't see us providing a canonical/reference, production ready VMM directly from rust-vmm. And even if we do, this should not be the drive for the crates, but only a good and performant example for the rust-vmm crates usage. > Also every designed creates should meet some release criteria otherwise a failure in one creates may result the failure of the whole project. > Yes, I think this is what Andreea had in mind when opening issue #14. > There are several things we can start with: > - Discuss #14(https://github.com/rust-vmm/community/issues/14) to come up a 'must have list' that will be included in the first release. > - Come up a project-wide high-level design. For example: > how to divide the creates > how to abstract the common code > how creates interact each other and come up possible traits definition > what interfaces we want to expose to users... > - Discuss how code/doc/test will be organized > - Define process/work flow/governance... > > But even before those, we need understand and write down our requirement clearly so everybody involved will be on the same > page. > There are some common requirements, but there won't be a one size fits all set of requirements for a single rust-vmm based VMM. > Just my thoughts. Welcome for further discussion. Thanks for the input. Cheers, Samuel.
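To illustrate the "custom and configurable VMMs" point in Rust terms, here is a rough, self-contained sketch of what a downstream consumer could look like. The traits below stand in for functionality that would come from separate rust-vmm crates (for instance a guest-memory crate and a vcpu/hypervisor crate); every name in it is invented for the example rather than taken from a real API.

// A toy "custom VMM" built from independent building blocks. The two
// traits stand in for separate rust-vmm crates; the names are invented
// for this sketch and are not a real API.

/// What a guest-memory crate would export: access to guest RAM.
pub trait GuestMemory {
    fn read(&self, addr: u64, buf: &mut [u8]) -> Result<(), String>;
    fn write(&self, addr: u64, buf: &[u8]) -> Result<(), String>;
}

/// What a hypervisor-abstraction crate would export: run a vCPU until it
/// exits with something the VMM has to handle.
pub trait Vcpu {
    fn run(&mut self) -> VcpuExit;
}

pub enum VcpuExit {
    IoOut { port: u16, data: Vec<u8> },
    Halt,
}

/// Each consumer glues together only the pieces it cares about; another
/// VMM could pick a different device set or a different hypervisor crate.
pub fn run_vmm(mem: &dyn GuestMemory, vcpu: &mut dyn Vcpu) -> Result<(), String> {
    // Load a (fake) one-byte guest image: a single HLT instruction.
    mem.write(0x10_0000, &[0xf4])?;
    loop {
        match vcpu.run() {
            VcpuExit::IoOut { port: 0x3f8, data } => {
                // Forward serial output to stdout.
                print!("{}", String::from_utf8_lossy(&data));
            }
            VcpuExit::IoOut { .. } => { /* ignore other ports in this toy */ }
            VcpuExit::Halt => return Ok(()),
        }
    }
}

Nothing here forces a single canonical binary; the generic VMM mentioned above would simply be one more consumer of the same crates.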
From chao.p.peng at intel.com Thu Feb 28 01:41:38 2019 From: chao.p.peng at intel.com (Peng, Chao P) Date: Thu, 28 Feb 2019 01:41:38 +0000 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: <20190227132533.GQ21421@caravaggio> References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> <20190227132533.GQ21421@caravaggio> Message-ID: > rust-vmm is a single project and we should be consistent in the way we build and design crates that are part of the project. > But I think we should be careful about making sure those crates can be used independently, as much as possible. We don't want to > produce one single VMM out of the rust-vmm crates, we want rust-vmm users to be able to build custom and configurable VMMs out > of them. > We may provide a generic VMM as an example on how to use those crates, but for now I don't see us providing a canonical/reference, > production ready VMM directly from rust-vmm. And even if we do, this should not be the drive for the crates, but only a good and > performant example for the rust-vmm crates usage. Understood. Then clearly defining the crate boundaries and the interactions among crates is even more important. This is a sign that a project-level design is urgent. ... > > But even before those, we need understand and write down our requirement clearly so everybody involved will be on the same > page. > > > There are some common requirements, but there won't be a one size fits all set of requirements for a single rust-vmm based VMM. That's fine. The project as a whole should try to solve the superset which covers almost all of these requirements. My point is we need to understand these requirements first, whether they are common or specific to certain usages. This is good for: - giving everybody the same language and reducing implicit assumptions as much as possible - designing correctly As an example, let's say we have all the device emulation in one crate; one may argue this can't satisfy their requirement, since they want to configure their own device combination. Then let's say, on the other hand, we have a design where each device is a standalone crate.
As other have already said, I think the objective is to make crates that are reusable in a variety of binaries. That’s a good way to identify parts that are reusable, and parts that are “details of the implementation”. As a matter of fact, I think it would be interesting to start thinking about small binaries that we could start working on “right away”, i.e. things that could work using only, say, the memory-model crate. The whole rust-vmm project also builds on a lot of earlier designs, experiments and experience. As a result, core design concepts are already “just there” simply by citation. For example, the memory-model crate currently being defined began as a simple extraction of existing code by Jiang. Paolo then started doing a lot of heavy lifting to isolate generic code and concepts. To me, discussing design changes based on real code proposals such as this is an effective way to move forward. > > To me, we'd better design the project from the user point of view: it's a single project as a whole. The division of the codes into creates is OK, but that's just kind of internal thing for the project. Also every designed creates should meet some release criteria otherwise a failure in one creates may result the failure of the whole project. > > There are several things we can start with: > - Discuss #14(https://github.com/rust-vmm/community/issues/14) to come up a 'must have list' that will be included in the first release. > - Come up a project-wide high-level design. For example: > how to divide the creates > how to abstract the common code > how creates interact each other and come up possible traits definition > what interfaces we want to expose to users... > - Discuss how code/doc/test will be organized > - Define process/work flow/governance… What I wrote above does not imply that it’s not a good time to talk about processes and documentation. In particular, Rust brings a lot of things in terms of testing, we should leverage that. Similarly, Rust has a clean approach to modularity, and writing code that is idiomatic in that respect seems quite important. Do we need more about “how to divide the crates”? I’m not sure we do, and I’m not sure we really can at this stage. As an example, Paolo’s current work takes advantage of generic traits as a mechanism to delineate common interfaces that, to a large extent, match an existing implementation. To me, this looks really promising (as in "forward-looking”). I was reviewing Jiang’s code when Paolo shared his intent to go that way, and as soon as he said it, it seemed “obvious” to me that it was the right choice. Of course, I understand that doing a larger redesign like this may cause more disruption if we want to reintegrate these changes in existing projects. But that’s precisely how we can discuss the trade-offs wrt. “how to abstract the common code”, I believe. So I completely agree that we should document what we are doing, but I think it makes sense to document it while we are doing it, for example by reviewing design changes that are currently being proposed, by trying to build small tools around them to see if the interface is lacking in some respect, etc. In a later reply, you also wrote: > As an example, let's say, we will have all the device emulation in one crate, one may argue this can't satisfy their requirement since they want to configure their own device combination. Then let's say, on the other hand, we have a design that each device being a standalone crate. 
Oh, this is crazy, I guess we will have crate pollution and crate dependency issue. So the right design will be likely a balance between the two. Then in which granularity can we build our device crates? I have no answer, may PCI-related device in one crate or something else for example. At this point the clear requirement should help us on getting the answer. I completely agree with the example and the related concern. On the other hand, I think this is precisely the kind of problem that we are likely to expose by testing and reviewing code more than by writing specifications ahead of time. > But even before those, we need understand and write down our requirement clearly so everybody involved will be on the same page. > > Just my thoughts. Welcome for further discussion. > > Thanks, > Chao >> -----Original Message----- >> From: Jonathan Bryce [mailto:jonathan at openstack.org] >> Sent: Thursday, February 21, 2019 4:26 AM >> To: rust-vmm at lists.opendev.org >> Subject: [Rust-VMM] Proposal for contribution and crate approval process >> >> Hi everyone, >> >> On the rust-vmm community meeting this morning there was a discussion about the approval process for new crates. From the >> discussion a basic proposal emerged: >> >> - Create a group of rust-vmm project-wide maintainers >> - Group size would start out with around 5 individuals >> - Maintainers should come from a variety of backgrounds and affiliations >> - Inclusion of a new crate would require approval from at least 3 maintainers >> - Maintainers should look for approval and feedback from multiple “consumer" communities (e.g. qemu, crosvm, kata, >> firecracker) >> - As the number of crates scale, maintenance at the crate level would be distributed beyond the project-wide group to avoid >> overloading the project-wide maintainers or creating bottlenecks within individual crates >> >> I offered to write this up and post on the list to make sure everyone had a chance to see and comment on it, so please send your >> thoughts/feedback. >> >> If this is agreeable as a process, we’ll need to bootstrap the initial set of maintainers. I have thoughts on that as well, but would love to >> hear others’ opinions too. >> >> Thanks, >> >> Jonathan >> _______________________________________________ >> Rust-vmm mailing list >> Rust-vmm at lists.opendev.org >> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From pbonzini at redhat.com Thu Feb 28 09:42:44 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Thu, 28 Feb 2019 10:42:44 +0100 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: <02E686D4-6D18-456B-B9ED-9E3F5A6F9DE0@redhat.com> References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> <02E686D4-6D18-456B-B9ED-9E3F5A6F9DE0@redhat.com> Message-ID: On 28/02/19 09:51, Christophe de Dinechin wrote: > I understand your concern, but I also disagree about the “single > binary” approach. As other have already said, I think the objective > is to make crates that are reusable in a variety of binaries. That’s > a good way to identify parts that are reusable, and parts that are > “details of the implementation”. As a matter of fact, I think it > would be interesting to start thinking about small binaries that we > could start working on “right away”, i.e. things that could work > using only, say, the memory-model crate. 
I agree; thinking about "small binaries" and sample users of rust-vmm crates helps deciding which crates to work on next. In fact that's also why it is important to have reference implementations and/or samples at each step: for use in other reference implementations and/or samples! For example, a sample vhost-user-blk implementation is probably the minimal example of what to do with rust-vmm crates. It would use the memory-model crate's mmap implementation + the virtqueue abstractions, so the next obvious steps for rust-vmm are: 1) to port virtqueue code (not necessarily all of virtio) from crosvm to and firecracker rust-vmm's memory-model crate; 2) to write a vhost-user crate and a sample code for that Another possibility could be a userspace IP stack (or a binding to libslirp which we're extracting from QEMU to a standalone library) and a sample vhost-user-net implementation. But in any case virtio comes first, and Jiang is already working on it as far as I understand. In parallel with this, crosvm and firecracker can and should be ported to rust-vmm's memory-model (which actually will be renamed to vm-memory). In turn, this is not an all-or-nothing thing and it can start with just renaming the methods to the names used in rust-vmm. > The whole rust-vmm project also builds on a lot of earlier designs, > experiments and experience. As a result, core design concepts are > already “just there” simply by citation. For example, the > memory-model crate currently being defined began as a simple > extraction of existing code by Jiang. Paolo then started doing a lot > of heavy lifting to isolate generic code and concepts. To me, > discussing design changes based on real code proposals such as this > is an effective way to move forward. Again I agree, and the simple extraction of existing code is already a huge step for those that, like me, have little or no exposure to crosvm and firecracker. If you can look at a small amount of battle-tested code with a fresh mind, the abstractions just come to you naturally. Paolo From chao.p.peng at intel.com Thu Feb 28 11:21:58 2019 From: chao.p.peng at intel.com (Peng, Chao P) Date: Thu, 28 Feb 2019 11:21:58 +0000 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> <02E686D4-6D18-456B-B9ED-9E3F5A6F9DE0@redhat.com> Message-ID: Understood your guys' motivation. Thanks for clarification;) Chao > -----Original Message----- > From: Paolo Bonzini [mailto:pbonzini at redhat.com] > Sent: Thursday, February 28, 2019 5:43 PM > To: Christophe de Dinechin ; Peng, Chao P > Cc: Jonathan Bryce ; rust-vmm at lists.opendev.org > Subject: Re: [Rust-VMM] Proposal for contribution and crate approval process > > On 28/02/19 09:51, Christophe de Dinechin wrote: > > I understand your concern, but I also disagree about the “single > > binary” approach. As other have already said, I think the objective is > > to make crates that are reusable in a variety of binaries. That’s a > > good way to identify parts that are reusable, and parts that are > > “details of the implementation”. As a matter of fact, I think it would > > be interesting to start thinking about small binaries that we could > > start working on “right away”, i.e. things that could work using only, > > say, the memory-model crate. > > I agree; thinking about "small binaries" and sample users of rust-vmm crates helps deciding which crates to work on next. 
In fact that's > also why it is important to have reference implementations and/or samples at each step: for use in other reference implementations > and/or samples! > > For example, a sample vhost-user-blk implementation is probably the minimal example of what to do with rust-vmm crates. It would > use the memory-model crate's mmap implementation + the virtqueue abstractions, so the next obvious steps for rust-vmm are: > > 1) to port virtqueue code (not necessarily all of virtio) from crosvm to and firecracker rust-vmm's memory-model crate; > > 2) to write a vhost-user crate and a sample code for that > > Another possibility could be a userspace IP stack (or a binding to libslirp which we're extracting from QEMU to a standalone library) and > a sample vhost-user-net implementation. But in any case virtio comes first, and Jiang is already working on it as far as I understand. > > In parallel with this, crosvm and firecracker can and should be ported to rust-vmm's memory-model (which actually will be renamed to > vm-memory). In turn, this is not an all-or-nothing thing and it can start with just renaming the methods to the names used in rust- > vmm. > > > The whole rust-vmm project also builds on a lot of earlier designs, > > experiments and experience. As a result, core design concepts are > > already “just there” simply by citation. For example, the memory-model > > crate currently being defined began as a simple extraction of existing > > code by Jiang. Paolo then started doing a lot of heavy lifting to > > isolate generic code and concepts. To me, discussing design changes > > based on real code proposals such as this is an effective way to move > > forward. > > Again I agree, and the simple extraction of existing code is already a huge step for those that, like me, have little or no exposure to > crosvm and firecracker. If you can look at a small amount of battle-tested code with a fresh mind, the abstractions just come to you > naturally. > > Paolo From cdupontd at redhat.com Thu Feb 28 08:48:53 2019 From: cdupontd at redhat.com (Christophe de Dinechin) Date: Thu, 28 Feb 2019 09:48:53 +0100 Subject: [Rust-VMM] Proposal for contribution and crate approval process In-Reply-To: References: <95B66680-2B2E-4A51-A86F-4F9904838F14@openstack.org> Message-ID: > On 27 Feb 2019, at 13:49, Peng, Chao P wrote: > > Sounds like a good process to start with. > > Meanwhile my feeling is that we need a high-level project-wide design first. The reason is that the creates are not really standalone projects. They are closely related and most likely they will form up a single binary from user' point of view. They should be well designed from the high-level at the very beginning. I understand your concern, but I also disagree about the “single binary” approach. As other have already said, I think the objective is to make crates that are reusable in a variety of binaries. That’s a good way to identify parts that are reusable, and parts that are “details of the implementation”. As a matter of fact, I think it would be interesting to start thinking about small binaries that we could start working on “right away”, i.e. things that could work using only, say, the memory-model crate. The whole rust-vmm project also builds on a lot of earlier designs, experiments and experience. As a result, core design concepts are already “just there” simply by citation. For example, the memory-model crate currently being defined began as a simple extraction of existing code by Jiang. 
Paolo then started doing a lot of heavy lifting to isolate generic code and concepts. To me, discussing design changes based on real code proposals such as this is an effective way to move forward. > > To me, we'd better design the project from the user point of view: it's a single project as a whole. The division of the codes into creates is OK, but that's just kind of internal thing for the project. Also every designed creates should meet some release criteria otherwise a failure in one creates may result the failure of the whole project. > > There are several things we can start with: > - Discuss #14(https://github.com/rust-vmm/community/issues/14) to come up a 'must have list' that will be included in the first release. > - Come up a project-wide high-level design. For example: > how to divide the creates > how to abstract the common code > how creates interact each other and come up possible traits definition > what interfaces we want to expose to users... > - Discuss how code/doc/test will be organized > - Define process/work flow/governance… What I wrote above does not imply that it’s not a good time to talk about processes and documentation. In particular, Rust brings a lot of things in terms of testing, we should leverage that. Similarly, Rust has a clean approach to modularity, and writing code that is idiomatic in that respect seems quite important. Do we need more about “how to divide the crates”? I’m not sure we do, and I’m not sure we really can at this stage. As an example, Paolo’s current work takes advantage of generic traits as a mechanism to delineate common interfaces that, to a large extent, match an existing implementation. To me, this looks really promising (as in "forward-looking”). I was reviewing Jiang’s code when Paolo shared his intent to go that way, and as soon as he said it, it seemed “obvious” to me that it was the right choice. Of course, I understand that doing a larger redesign like this may cause more disruption if we want to reintegrate these changes in existing projects. But that’s precisely how we can discuss the trade-offs wrt. “how to abstract the common code”, I believe. So I completely agree that we should document what we are doing, but I think it makes sense to document it while we are doing it, for example by reviewing design changes that are currently being proposed, by trying to build small tools around them to see if the interface is lacking in some respect, etc. In a later reply, you also wrote: > As an example, let's say, we will have all the device emulation in one crate, one may argue this can't satisfy their requirement since they want to configure their own device combination. Then let's say, on the other hand, we have a design that each device being a standalone crate. Oh, this is crazy, I guess we will have crate pollution and crate dependency issue. So the right design will be likely a balance between the two. Then in which granularity can we build our device crates? I have no answer, may PCI-related device in one crate or something else for example. At this point the clear requirement should help us on getting the answer. I completely agree with the example and the related concern. On the other hand, I think this is precisely the kind of problem that we are likely to expose by testing and reviewing code more than by writing specifications ahead of time. > But even before those, we need understand and write down our requirement clearly so everybody involved will be on the same page. > > Just my thoughts. 
Welcome for further discussion. > > Thanks, > Chao >> -----Original Message----- >> From: Jonathan Bryce [mailto:jonathan at openstack.org] >> Sent: Thursday, February 21, 2019 4:26 AM >> To: rust-vmm at lists.opendev.org >> Subject: [Rust-VMM] Proposal for contribution and crate approval process >> >> Hi everyone, >> >> On the rust-vmm community meeting this morning there was a discussion about the approval process for new crates. From the >> discussion a basic proposal emerged: >> >> - Create a group of rust-vmm project-wide maintainers >> - Group size would start out with around 5 individuals >> - Maintainers should come from a variety of backgrounds and affiliations >> - Inclusion of a new crate would require approval from at least 3 maintainers >> - Maintainers should look for approval and feedback from multiple “consumer" communities (e.g. qemu, crosvm, kata, >> firecracker) >> - As the number of crates scale, maintenance at the crate level would be distributed beyond the project-wide group to avoid >> overloading the project-wide maintainers or creating bottlenecks within individual crates >> >> I offered to write this up and post on the list to make sure everyone had a chance to see and comment on it, so please send your >> thoughts/feedback. >> >> If this is agreeable as a process, we’ll need to bootstrap the initial set of maintainers. I have thoughts on that as well, but would love to >> hear others’ opinions too. >> >> Thanks, >> >> Jonathan >> _______________________________________________ >> Rust-vmm mailing list >> Rust-vmm at lists.opendev.org >> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From jonathan at openstack.org Thu Feb 28 23:42:49 2019 From: jonathan at openstack.org (Jonathan Bryce) Date: Thu, 28 Feb 2019 17:42:49 -0600 Subject: [Rust-VMM] Project logo/identity Message-ID: <7E68C79D-6AE1-4585-AF16-7BD30DA19AEC@openstack.org> Hi everyone, Someone reached out to Claire with the idea that it might be nice to have a basic logo/identity for the rust-vmm project to use when presenting and talking about what we’re working on here. We have some experience doing this with community input and are happy to create something if people think it would be valuable. As a first step, we like to gather thoughts about what the project and visual identity should represent, so I created a brainstorming etherpad to collect input[1]. When you have a chance share your ideas and perspective on what rust-vmm is about from a mission and philosophy standpoint or what you think would be important concepts to try to represent and communicate. I took a stab at seeding it with some words, but please add whatever comes to mind. At this stage, the more input the better! Next week, we can have some of our designers join the community call and spend a few minutes chatting about it as well. Thanks! Jonathan 1. https://etherpad.openstack.org/p/rust-vmm-identity-brainstorm