From fandree at amazon.com Thu Sep 2 08:47:32 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Thu, 2 Sep 2021 08:47:32 +0000 Subject: [Rust-VMM] Branch rename Message-ID: <1630572452964.86006@amazon.com> Hey everyone, We just completed an effort of renaming the master branch to main for all rust-vmm repositories. This topic was discussed during the last rust-vmm sync meeting as well. Then we decided that we'll mark the master branch as deprecated, and add a new branch called main so we don't break customers that are consuming the master branch [1]. We reverted this decision as we got new input from Rob, letting us know that GitHub provides a redirect from master to main. This simplified the switch significantly (thanks Rob!). Don't forget to also update your local branches to rename master to main, and track the upstream `main` branch now. [1] https://github.com/rust-vmm/community/pull/117 Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Tue Sep 7 07:11:24 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Tue, 7 Sep 2021 07:11:24 +0000 Subject: [Rust-VMM] rust-vmm sync meeting Message-ID: Meeting Agenda: https://etherpad.opendev.org/p/rust-vmm-sync-2021 ==============Conference Bridge Information============== You have been invited to an online meeting, powered by Amazon Chime. Chime meeting ID: 6592165432 Join via Chime clients (manually): Select 'Meetings > Join a Meeting', and enter 6592165432 Join via Chime clients (auto-call): If you invite auto-call as attendee, Chime will call you when the meeting starts, select 'Answer' Join via browser screen share: https://chime.aws/6592165432 Join via phone (US): +1-929-432-4463,,,6592165432# Join via phone (US toll-free): +1-855-552-4463,,,6592165432# International dial-in: https://chime.aws/dialinnumbers/ In-room video system: Ext: 62000, Meeting PIN: 6592165432# ================================================= ================Before your meeting:================ * Learn how to use the touch panel. * Prefer a video? Watch these touch panel how-to videos. * Find out more about room layouts. * Get more information at it.amazon.com/meetings. ================================================ Created with Amazon Meetings (fandree@, edit this series) Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 8450 bytes Desc: not available URL: From fandree at amazon.com Tue Sep 7 09:53:44 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Tue, 7 Sep 2021 09:53:44 +0000 Subject: [Rust-VMM] Reduce OPS Load on Maintaining Repos Message-ID: <1631008424800.67184@amazon.com> Hey folks, We discussed at the sync meeting about the high operational load that we currently have, and the main root cause. One of the pain points is that we have many repositories we need to maintain, which need regular CI updates. There is also an associated cost with merging dependabot PRs. 
We want to improve the load by working on 3 areas: - [DONE] archive unused repositories; repositories that do not yet have useful code are now archived; repositories to which this applies: io-rate-limiter, vmm-vcpu, vm-allocator, and kvm; once we start working on these crates, we will undo the archive - [NOT STARTED] reduce the number of active repositories by grouping together crates in workspaces (crates will still be published independently, but they'll share the CI); this is currently pending further analysis, and we we'll come back to it in the near future. - [IN PROGRESS] reduce the frequency of dependabot PRs from daily to weekly; this needs configuration updates in all existing repositories. I created a dummy bot that submits PRs with the changes, but there is a bug in the bot that I'll need to address before we can merge them. Grouping repositories and archiving unused repositories helps because with a reduced number of repositories we have less dependabot PRs that we need to review. Also, when changes are required in Buildkite, we have less manual configurations to do. What other improvements do you think would be worth to consider? Thanks, Andreea Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fandree at amazon.com Fri Sep 10 11:12:18 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Fri, 10 Sep 2021 11:12:18 +0000 Subject: [Rust-VMM] Fw: vm-virtio: discuss next steps In-Reply-To: <62fdd519760a4088ad72d573fde08b39@EX13D10EUB003.ant.amazon.com> References: <62fdd519760a4088ad72d573fde08b39@EX13D10EUB003.ant.amazon.com> Message-ID: <1631272337894.19282@amazon.com> FYI. We discussed on Slack about publishing the queue implementation. In this meeting we plan to see what are some appropriate next steps. ________________________________ From: Florescu, Andreea Sent: Friday, September 10, 2021 2:01 PM To: Agache, Alexandru; Loghin, Laura; meet at chime.aws; pin+8949618186 at chime.aws; slp at redhat.com; liuj97 at gmail.com Cc: Dumitru, Catalin-andrei Subject: vm-virtio: discuss next steps When: Monday, September 13, 2021 4:30 PM-5:30 PM. Where: I'm setting up this meeting as we discussed on Slack to discuss about the next steps regarding vm-virtio/queue implementation. Please forward this meeting to anyone else that might be interested in the topic. ==============Conference Bridge Information============== You have been invited to an online meeting, powered by Amazon Chime. Chime meeting ID: 8949618186 Join via Chime clients (manually): Select "Meetings > Join a Meeting", and enter 8949618186 Join via Chime clients (auto-call): If you invite auto-call as attendee, Chime will call you when the meeting starts, select "Answer" Join via browser screen share: https://chime.aws/8949618186 Join via phone (US): +1-929-432-4463,,,8949618186# Join via phone (US toll-free): +1-855-552-4463,,,8949618186# International dial-in: https://chime.aws/dialinnumbers/ In-room video system: Ext: 62000, Meeting PIN: 8949618186# ================================================= ================Before your meeting================ * Learn how to use the touch panel. * Prefer a video? Watch these touch panel how-to videos. * Find out more about room layouts. * Get more information at it.amazon.com/meetings. 
================================================ Created with Amazon Meetings (fandree@, edit this meeting) Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3797 bytes Desc: not available URL: From alex.bennee at linaro.org Mon Sep 13 12:44:55 2021 From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=) Date: Mon, 13 Sep 2021 13:44:55 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? Message-ID: <87lf40vay1.fsf@linaro.org> Hi, As we consider the next cycle for Project Stratos I would like to make some more progress on hypervisor agnosticism for our virtio backends. While we have implemented a number of virtio vhost-user backends using C we've rapidly switched to using rust-vmm based ones for virtio-i2c, virtio-rng and virtio-gpio. Given the interest in Rust for implementing backends does it make sense to do some enabling work in rust-vmm to support Xen? There are two chunks of work I can think of: 1. Enough of libxl/hypervisor interface to implement an IOREQ end point. This would require supporting enough of the hypervisor interface to support the implementation of an IOREQ server. We would also need to think about how we would map the IOREQ view of the world into the existing vhost-user interface so we can re-use the current vhost-user backends code base. The two approaches I can think of are: a) implement a vhost-user master that speaks IOREQ to the hypervisor and vhost-user to the vhost-user slave. In this case the bridge would be standing in for something like QEMU. b) implement some variants of the vhost-user slave traits that can talk directly to the hypervisor to get/send the equivalent kick/notify events. I don't know if this might be too complex as the impedance matching between the two interfaces might be too great. This assumes most of the setup is done by the existing toolstack, so the existing libxl tools are used to create, connect and configure the domains before the backend is launched. which leads to: 2. The rest of the libxl/hypervisor interface. This would be the rest of the interface to allow rust-vmm tools to be written that could create, configure and manage Xen domains with pure rust tools. My main concern about this is how rust-vmm's current model (which is very much KVM influenced) will be able to handle the differences for a type-1 hypervisor. Wei's pointed me to the Linux support that was added to expose a Hyper-V control interface via the Linux kernel. While I can see support has been merged on other rust based projects I think the rust-vmm crate is still outstanding: https://github.com/rust-vmm/community/issues/50 and I guess this would need revisiting for Xen to see if the proposed abstraction would scale across other hypervisors. Finally there is the question of how/if any of this would relate to the concept of bare-metal rust backends? We've talked about bare metal backends before but I wonder if the programming model for them is going to be outside the scope of rust-vmm? 
Would be program just be hardwired to IRQs and be presented a doorbell port to kick or would we want to have at least some of the higher level rust-vmm abstractions for dealing with navigating the virtqueues and responding and filling in data? Thoughts? -- Alex Benn?e From andrew.cooper3 at citrix.com Mon Sep 13 15:32:46 2021 From: andrew.cooper3 at citrix.com (Andrew Cooper) Date: Mon, 13 Sep 2021 16:32:46 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? In-Reply-To: <87lf40vay1.fsf@linaro.org> References: <87lf40vay1.fsf@linaro.org> Message-ID: On 13/09/2021 13:44, Alex Benn?e wrote: > Hi, > > As we consider the next cycle for Project Stratos I would like to make > some more progress on hypervisor agnosticism for our virtio backends. > While we have implemented a number of virtio vhost-user backends using C > we've rapidly switched to using rust-vmm based ones for virtio-i2c, > virtio-rng and virtio-gpio. Given the interest in Rust for implementing > backends does it make sense to do some enabling work in rust-vmm to > support Xen? > > There are two chunks of work I can think of: > > 1. Enough of libxl/hypervisor interface to implement an IOREQ end point. No libxl here at all. As of Xen 4.15, there are enough stable interfaces to implement simple IOERQ servers. https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4ee1a2d0c378ad1 was the commit where I removed the final unstable interface from varstored (terrible name) which is a dom0 backend for UEFI secure variable handling.? As such, it also serves as a (not totally simple) reference of an IOERQ server. There are a few bits and pieces of rust going on within Xen, and a whole load of plans.? Also, there is a lot of interest from other downstreams in being able to write Rust backends. We've got a placeholder xen and xen-sys crates, and placeholder work for supporting cross-compile as x86 PV and PVH stubdomains. The want to have a simple IOREQ server compiled either as a dom0 backend, or as a PV or PVH stubdomains influences some of the design decisions early on, but they're all no-brainers for the longevity of the work. I started work on trying to reimplement varstored entirely in Rust as a hackathon project, although ran out of time trying to make hypercall buffers work (there is a bug with Box and non-global allocators causing rustc to hit an assert().? In the short term, we'll have to implement hypercall buffers in a slightly more irritating way). Furthermore, stick to the stable hypercalls only.? Xen's C libraries are disaster for cross-version compatibility, and you absolutely do not want to recompile your rust program just to run it against a different version of the hypervisor.? The plan is to start with simple IOREQ servers, which are on fully stable interfaces, then stabilise further hypercalls as necessary to expand functionality. It's high time the Xen Rust working group (which has been talked about for several years now) actually forms... ~Andrew From alex.bennee at linaro.org Tue Sep 14 14:44:01 2021 From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=) Date: Tue, 14 Sep 2021 15:44:01 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? 
In-Reply-To: References: <87lf40vay1.fsf@linaro.org> Message-ID: <874kanus97.fsf@linaro.org> Andrew Cooper writes: > On 13/09/2021 13:44, Alex Benn?e wrote: >> Hi, >> >> As we consider the next cycle for Project Stratos I would like to make >> some more progress on hypervisor agnosticism for our virtio backends. >> While we have implemented a number of virtio vhost-user backends using C >> we've rapidly switched to using rust-vmm based ones for virtio-i2c, >> virtio-rng and virtio-gpio. Given the interest in Rust for implementing >> backends does it make sense to do some enabling work in rust-vmm to >> support Xen? >> >> There are two chunks of work I can think of: >> >> 1. Enough of libxl/hypervisor interface to implement an IOREQ end point. > > No libxl here at all. > > As of Xen 4.15, there are enough stable interfaces to implement simple > IOERQ servers. > > https://github.com/xapi-project/varstored/commit/fde707c59f7a189e1d4e97c1a4ee1a2d0c378ad1 > was the commit where I removed the final unstable interface from > varstored (terrible name) which is a dom0 backend for UEFI secure > variable handling.? As such, it also serves as a (not totally simple) > reference of an IOERQ server. > > > There are a few bits and pieces of rust going on within Xen, and a whole > load of plans.? Also, there is a lot of interest from other downstreams > in being able to write Rust backends. > > We've got a placeholder xen and xen-sys crates, and placeholder work for > supporting cross-compile as x86 PV and PVH stubdomains. Are these in the rust-vmm project is elsewhere? > The want to have a simple IOREQ server compiled either as a dom0 > backend, or as a PV or PVH stubdomains influences some of the design > decisions early on, but they're all no-brainers for the longevity of the > work. Just to clarify nomenclature is a PVH stubdomain what I'm referring to as a bare metal backend, i.e: a unikernel or RTOS image that implements the backend without having to transition between some sort of userspace and it's supporting kernel? > I started work on trying to reimplement varstored entirely in Rust as a > hackathon project, although ran out of time trying to make hypercall > buffers work (there is a bug with Box and non-global allocators causing > rustc to hit an assert().? In the short term, we'll have to implement > hypercall buffers in a slightly more irritating way). > > Furthermore, stick to the stable hypercalls only.? Xen's C libraries are > disaster for cross-version compatibility, and you absolutely do not want > to recompile your rust program just to run it against a different > version of the hypervisor.? The plan is to start with simple IOREQ > servers, which are on fully stable interfaces, then stabilise further > hypercalls as necessary to expand functionality. Are the hypercalls mediated by a kernel layer or are you making direct HVC calls (on ARM) to the hypervisor from userspace? Where would I look in the Xen code to find the hypercalls that are considered stable and won't change between versions? > It's high time the Xen Rust working group (which has been talked about > for several years now) actually forms... Indeed part of the purpose of this email was to smoke out those who are interested in the intersection of Xen, Rust and VirtIO ;-) -- Alex Benn?e From andrew.cooper3 at citrix.com Tue Sep 14 18:42:34 2021 From: andrew.cooper3 at citrix.com (Andrew Cooper) Date: Tue, 14 Sep 2021 19:42:34 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? 
In-Reply-To: <874kanus97.fsf@linaro.org> References: <87lf40vay1.fsf@linaro.org> <874kanus97.fsf@linaro.org> Message-ID: <188afb35-54c1-9a52-19f1-867cea4487ea@citrix.com> On 14/09/2021 15:44, Alex Benn?e wrote: > Andrew Cooper writes: > >> On 13/09/2021 13:44, Alex Benn?e wrote: >>> Hi, >>> >>> As we consider the next cycle for Project Stratos I would like to make >>> some more progress on hypervisor agnosticism for our virtio backends. >>> While we have implemented a number of virtio vhost-user backends using C >>> we've rapidly switched to using rust-vmm based ones for virtio-i2c, >>> virtio-rng and virtio-gpio. Given the interest in Rust for implementing >>> backends does it make sense to do some enabling work in rust-vmm to >>> support Xen? >>> >>> There are two chunks of work I can think of: >>> >>> 1. Enough of libxl/hypervisor interface to implement an IOREQ end point. >> No libxl here at all. >> >> As of Xen 4.15, there are enough stable interfaces to implement simple >> IOERQ servers. >> >> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fxapi-project%2Fvarstored%2Fcommit%2Ffde707c59f7a189e1d4e97c1a4ee1a2d0c378ad1&data=04%7C01%7CAndrew.Cooper3%40citrix.com%7C08a3fe14704a4d6888cf08d9778ee5b2%7C335836de42ef43a2b145348c2ee9ca5b%7C0%7C0%7C637672277905441489%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=%2B1pKhIuqzCGkgYD4snd6jnxjoEJzrCgUdol%2FfA2kwOk%3D&reserved=0 >> was the commit where I removed the final unstable interface from >> varstored (terrible name) which is a dom0 backend for UEFI secure >> variable handling.? As such, it also serves as a (not totally simple) >> reference of an IOERQ server. >> >> >> There are a few bits and pieces of rust going on within Xen, and a whole >> load of plans.? Also, there is a lot of interest from other downstreams >> in being able to write Rust backends. >> >> We've got a placeholder xen and xen-sys crates, and placeholder work for >> supporting cross-compile as x86 PV and PVH stubdomains. > Are these in the rust-vmm project is elsewhere? https://crates.io/crates/xen-sys When I say placeholder, I really do mean placeholder. To start this work meaningfully, we'd want to make a repo (or several) in the xen-project organisation on github or gitlab (we have both, for reasons), and set these as the upstream of the xen and xen-sys crates. >> The want to have a simple IOREQ server compiled either as a dom0 >> backend, or as a PV or PVH stubdomains influences some of the design >> decisions early on, but they're all no-brainers for the longevity of the >> work. > Just to clarify nomenclature is a PVH stubdomain what I'm referring to > as a bare metal backend, i.e: a unikernel or RTOS image that implements > the backend without having to transition between some sort of userspace > and it's supporting kernel? I think so, yes, although calling it "bare metal" seems misleading for something which is a VM targetted at a specific hypervisor... >> I started work on trying to reimplement varstored entirely in Rust as a >> hackathon project, although ran out of time trying to make hypercall >> buffers work (there is a bug with Box and non-global allocators causing >> rustc to hit an assert().? In the short term, we'll have to implement >> hypercall buffers in a slightly more irritating way). >> >> Furthermore, stick to the stable hypercalls only.? 
Xen's C libraries are >> disaster for cross-version compatibility, and you absolutely do not want >> to recompile your rust program just to run it against a different >> version of the hypervisor.? The plan is to start with simple IOREQ >> servers, which are on fully stable interfaces, then stabilise further >> hypercalls as necessary to expand functionality. > Are the hypercalls mediated by a kernel layer or are you making direct > HVC calls (on ARM) to the hypervisor from userspace? For a dom0 backends irrespective of architecture, you need to issue ioctl()'s on the appropriate kernel device. For a PV/PVH stubdom, you should make a call into the hypercall_page https://xenbits.xen.org/docs/latest/guest-guide/x86/hypercall-abi.html because Intel and AMD used different instructions for their equivalent of HVC. ARM doesn't have the hypercall page ABI, so I'd expect the hypercall implementation to expand to HVC directly. In terms of rust APIs, we'd want a crate which has target-specific implementations so the caller need not worry about the implementation details in the common case. > > Where would I look in the Xen code to find the hypercalls that are > considered stable and won't change between versions? I'm afraid that's mostly in developers heads right now. For a first pass, you can look for __XEN_TOOLS__? (This is mis-named, and ought to be called __XEN_UNSTABLE_INTERFACE__, because...) but be aware that some things currently tagged __XEN_TOOLS__ are incorrect and are in fact stable. As a first pass, assume everything is unstable.? The things contained within libxendevicemodel and libxenforeignmem are stable and were specifically made so to try and get simple IOREQ server functionality done and stable. Almost everything else, particularly concerning the toolstack operations, is unstable.? There is 15 years of organic growth and dubious decisions here, and they need unpicking carefully.? We've got some hypercalls which look like they're unstable, but are actually stable (as they were exposed to guests), and therefore have ridiculous interfaces. The "ABI v2" work is massive and complicated, and picking at some of the corners based on "what is needed to make new $FOO work" is a good way to make some inroads. >> It's high time the Xen Rust working group (which has been talked about >> for several years now) actually forms... > Indeed part of the purpose of this email was to smoke out those who are > interested in the intersection of Xen, Rust and VirtIO ;-) The conversation has come up quite a few times in the past, but mostly by people who are also busy with other things. ~Andrew From sstabellini at kernel.org Tue Sep 14 21:17:56 2021 From: sstabellini at kernel.org (Stefano Stabellini) Date: Tue, 14 Sep 2021 14:17:56 -0700 (PDT) Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? In-Reply-To: <188afb35-54c1-9a52-19f1-867cea4487ea@citrix.com> References: <87lf40vay1.fsf@linaro.org> <874kanus97.fsf@linaro.org> <188afb35-54c1-9a52-19f1-867cea4487ea@citrix.com> Message-ID: On Tue, 14 Sep 2021, Andrew Cooper wrote: > On 14/09/2021 15:44, Alex Benn?e wrote: > > Andrew Cooper writes: > > > >> On 13/09/2021 13:44, Alex Benn?e wrote: > >>> Hi, > >>> > >>> As we consider the next cycle for Project Stratos I would like to make > >>> some more progress on hypervisor agnosticism for our virtio backends. 
> >>> While we have implemented a number of virtio vhost-user backends using C > >>> we've rapidly switched to using rust-vmm based ones for virtio-i2c, > >>> virtio-rng and virtio-gpio. Given the interest in Rust for implementing > >>> backends does it make sense to do some enabling work in rust-vmm to > >>> support Xen? > >>> > >>> There are two chunks of work I can think of: > >>> > >>> 1. Enough of libxl/hypervisor interface to implement an IOREQ end point. > >> No libxl here at all. > >> > >> As of Xen 4.15, there are enough stable interfaces to implement simple > >> IOERQ servers. > >> > >> https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fxapi-project%2Fvarstored%2Fcommit%2Ffde707c59f7a189e1d4e97c1a4ee1a2d0c378ad1&data=04%7C01%7CAndrew.Cooper3%40citrix.com%7C08a3fe14704a4d6888cf08d9778ee5b2%7C335836de42ef43a2b145348c2ee9ca5b%7C0%7C0%7C637672277905441489%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=%2B1pKhIuqzCGkgYD4snd6jnxjoEJzrCgUdol%2FfA2kwOk%3D&reserved=0 > >> was the commit where I removed the final unstable interface from > >> varstored (terrible name) which is a dom0 backend for UEFI secure > >> variable handling.? As such, it also serves as a (not totally simple) > >> reference of an IOERQ server. > >> > >> > >> There are a few bits and pieces of rust going on within Xen, and a whole > >> load of plans.? Also, there is a lot of interest from other downstreams > >> in being able to write Rust backends. > >> > >> We've got a placeholder xen and xen-sys crates, and placeholder work for > >> supporting cross-compile as x86 PV and PVH stubdomains. > > Are these in the rust-vmm project is elsewhere? > > https://crates.io/crates/xen-sys > > When I say placeholder, I really do mean placeholder. > > To start this work meaningfully, we'd want to make a repo (or several) > in the xen-project organisation on github or gitlab (we have both, for > reasons), and set these as the upstream of the xen and xen-sys crates. > > >> The want to have a simple IOREQ server compiled either as a dom0 > >> backend, or as a PV or PVH stubdomains influences some of the design > >> decisions early on, but they're all no-brainers for the longevity of the > >> work. > > Just to clarify nomenclature is a PVH stubdomain what I'm referring to > > as a bare metal backend, i.e: a unikernel or RTOS image that implements > > the backend without having to transition between some sort of userspace > > and it's supporting kernel? > > I think so, yes, although calling it "bare metal" seems misleading for > something which is a VM targetted at a specific hypervisor... > > > >> I started work on trying to reimplement varstored entirely in Rust as a > >> hackathon project, although ran out of time trying to make hypercall > >> buffers work (there is a bug with Box and non-global allocators causing > >> rustc to hit an assert().? In the short term, we'll have to implement > >> hypercall buffers in a slightly more irritating way). > >> > >> Furthermore, stick to the stable hypercalls only.? Xen's C libraries are > >> disaster for cross-version compatibility, and you absolutely do not want > >> to recompile your rust program just to run it against a different > >> version of the hypervisor.? The plan is to start with simple IOREQ > >> servers, which are on fully stable interfaces, then stabilise further > >> hypercalls as necessary to expand functionality. 
> > Are the hypercalls mediated by a kernel layer or are you making direct > > HVC calls (on ARM) to the hypervisor from userspace? > > For a dom0 backends irrespective of architecture, you need to issue > ioctl()'s on the appropriate kernel device. > > For a PV/PVH stubdom, you should make a call into the hypercall_page > https://xenbits.xen.org/docs/latest/guest-guide/x86/hypercall-abi.html > because Intel and AMD used different instructions for their equivalent > of HVC. > > ARM doesn't have the hypercall page ABI, so I'd expect the hypercall > implementation to expand to HVC directly. See for example arch/arm64/xen/hypercall.S in Linux From dwmw2 at infradead.org Wed Sep 22 12:03:52 2021 From: dwmw2 at infradead.org (David Woodhouse) Date: Wed, 22 Sep 2021 13:03:52 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? In-Reply-To: <87lf40vay1.fsf@linaro.org> References: <87lf40vay1.fsf@linaro.org> Message-ID: <04272e87a8939be46acddd3c75bbffa84b0a40c1.camel@infradead.org> On Mon, 2021-09-13 at 13:44 +0100, Alex Benn?e wrote: > Hi, > > As we consider the next cycle for Project Stratos I would like to make > some more progress on hypervisor agnosticism for our virtio backends. > While we have implemented a number of virtio vhost-user backends using C > we've rapidly switched to using rust-vmm based ones for virtio-i2c, > virtio-rng and virtio-gpio. Given the interest in Rust for implementing > backends does it make sense to do some enabling work in rust-vmm to > support Xen? I like this idea. Somewhat separately, Alex Agache has already started some preliminary hacking on supporting Xen guests within rust-vmm (on top of Linux/KVM): https://github.com/alexandruag/vmm-reference/commits/xen Being able to run on *actual* Xen would be good too. And we should also aspire to do guest-transparent live migration between the two hosting environments. Where relevant, it would be great to be able to share components (like emulation of the Xen PCI platform device, a completely single-tenant XenStore implementation dedicated to a single guest, perhaps PV netback/blkback and other things). -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5174 bytes Desc: not available URL: From fandree at amazon.com Wed Sep 22 12:10:18 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Wed, 22 Sep 2021 12:10:18 +0000 Subject: [Rust-VMM] rust-vmm sync meeting Message-ID: <5b4396f9ba13440489b231fe11c1e579@EX13D10EUB003.ant.amazon.com> Update: refreshing the series. Meeting Agenda: https://etherpad.opendev.org/p/rust-vmm-sync-2021 ==============Conference Bridge Information============== You have been invited to an online meeting, powered by Amazon Chime. Chime meeting ID: 6592165432 Join via Chime clients (manually): Select 'Meetings > Join a Meeting', and enter 6592165432 Join via Chime clients (auto-call): If you invite auto-call as attendee, Chime will call you when the meeting starts, select 'Answer' Join via browser screen share: https://chime.aws/6592165432 Join via phone (US): +1-929-432-4463,,,6592165432# Join via phone (US toll-free): +1-855-552-4463,,,6592165432# International dial-in: https://chime.aws/dialinnumbers/ In-room video system: Ext: 62000, Meeting PIN: 6592165432# ================================================= ================Before your meeting:================ * Learn how to use the touch panel. * Prefer a video? Watch these touch panel how-to videos. 
* Find out more about room layouts. * Get more information at it.amazon.com/meetings. ================================================ Created with Amazon Meetings (fandree@, edit this series) Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 9401 bytes Desc: not available URL: From alex.bennee at linaro.org Wed Sep 22 17:44:41 2021 From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=) Date: Wed, 22 Sep 2021 18:44:41 +0100 Subject: [Rust-VMM] Is it time to start implementing Xen bindings for rust-vmm? In-Reply-To: <04272e87a8939be46acddd3c75bbffa84b0a40c1.camel@infradead.org> References: <87lf40vay1.fsf@linaro.org> <04272e87a8939be46acddd3c75bbffa84b0a40c1.camel@infradead.org> Message-ID: <871r5go5xq.fsf@linaro.org> David Woodhouse writes: > [[S/MIME Signed Part:Undecided]] > On Mon, 2021-09-13 at 13:44 +0100, Alex Benn?e wrote: >> Hi, >> >> As we consider the next cycle for Project Stratos I would like to make >> some more progress on hypervisor agnosticism for our virtio backends. >> While we have implemented a number of virtio vhost-user backends using C >> we've rapidly switched to using rust-vmm based ones for virtio-i2c, >> virtio-rng and virtio-gpio. Given the interest in Rust for implementing >> backends does it make sense to do some enabling work in rust-vmm to >> support Xen? > > I like this idea. > > Somewhat separately, Alex Agache has already started some preliminary > hacking on supporting Xen guests within rust-vmm (on top of Linux/KVM): > https://github.com/alexandruag/vmm-reference/commits/xen I'll be sending along a more detailed post once I've finished my work breakdown but I'm currently envisioning two parts. A xen-sys crate for the low level access that supports both ioctl and hypercall calls. This would be useful for other projects such as stubdomains (think a "bare-metal" RTOS with some sort of backend, uni-kernel style). It would also be the lowest layer that rust-vmm can use to interact with the hypervisor. I'm aware the HyperV solution is to present a KVM-like ioctl interface via the host kernel. However if we want generality with type-1 hypervisors we can't assume all will get suitable translation layers in the kernel. Fortunately for the time being our focus is on virtio backends so we don't need to get directly involved in the hypervisor run loop... for now. > Being able to run on *actual* Xen would be good too. And we should also > aspire to do guest-transparent live migration between the two hosting > environments. > > Where relevant, it would be great to be able to share components (like > emulation of the Xen PCI platform device, a completely single-tenant > XenStore implementation dedicated to a single guest, perhaps PV > netback/blkback and other things). For Stratos portable virtio backends is one of our project goals. 
> > > [[End of S/MIME Signed Part]] -- Alex Benn?e From alex.bennee at linaro.org Fri Sep 24 16:02:46 2021 From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=) Date: Fri, 24 Sep 2021 17:02:46 +0100 Subject: [Rust-VMM] Xen Rust VirtIO demos work breakdown for Project Stratos Message-ID: <87pmsylywy.fsf@linaro.org> Hi, The following is a breakdown (as best I can figure) of the work needed to demonstrate VirtIO backends in Rust on the Xen hypervisor. It requires work across a number of projects but notably core rust and virtio enabling in the Xen project (building on the work EPAM has already done) and the start of enabling rust-vmm crate to work with Xen. The first demo is a fairly simple toy to exercise the direct hypercall approach for a unikernel backend. On it's own it isn't super impressive but hopefully serves as a proof of concept for the idea of having backends running in a single exception level where latency will be important. The second is a much more ambitious bridge between Xen and vhost-user to allow for re-use of the existing vhost-user backends with the bridge acting as a proxy for what would usually be a full VMM in the type-2 hypervisor case. With that in mind the rust-vmm work is only aimed at doing the device emulation and doesn't address the larger question of how type-1 hypervisors can be integrated into the rust-vmm hypervisor model. A quick note about the estimates. They are exceedingly rough guesses plucked out of the air and I would be grateful for feedback from the appropriate domain experts on if I'm being overly optimistic or pessimistic. The links to the Stratos JIRA should be at least read accessible to all although they contain the same information as the attached document (albeit with nicer PNG renderings of my ASCII art ;-). There is a Stratos sync-up call next Thursday: https://calendar.google.com/event?action=TEMPLATE&tmeid=MWpidm5lbzM5NjlydnAxdWxvc2s4aGI0ZGpfMjAyMTA5MzBUMTUwMDAwWiBjX2o3bmdpMW84cmxvZmtwZWQ0cjVjaDk4bXZnQGc&tmsrc=c_j7ngi1o8rlofkped4r5ch98mvg%40group.calendar.google.com and I'm sure there will also be discussion in the various projects (hence the wide CC list). The Stratos calls are open to anyone who wants to attend and we welcome feedback from all who are interested. So on with the work breakdown: ??????????????????????????????? STRATOS PLANNING FOR 21 TO 22 Alex Benn?e ??????????????????????????????? Table of Contents ????????????????? 1. Xen Rust Bindings ([STR-51]) .. 1. Upstream an "official" rust crate for Xen ([STR-52]) .. 2. Basic Hypervisor Interactions hypercalls ([STR-53]) .. 3. [#10] Access to XenStore service ([STR-54]) .. 4. VirtIO support hypercalls ([STR-55]) 2. Xen Hypervisor Support for Stratos ([STR-56]) .. 1. Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) .. 2. Tweaks to tooling to launch VirtIO guests 3. rust-vmm support for Xen VirtIO ([STR-59]) .. 1. Make vm-memory Xen aware ([STR-60]) .. 2. Xen IO notification and IRQ injections ([STR-61]) 4. Stratos Demos .. 1. Rust based stubdomain monitor ([STR-62]) .. 2. Xen aware vhost-user master ([STR-63]) 1 Xen Rust Bindings ([STR-51]) ?????????????????????????????? There exists a [placeholder repository] with the start of a set of x86_64 bindings for Xen and a very basic hello world uni-kernel example. This forms the basis of the initial Xen Rust work and will be available as a [xen-sys crate] via cargo. 
[STR-51] [placeholder repository] [xen-sys crate] 1.1 Upstream an "official" rust crate for Xen ([STR-52]) ???????????????????????????????????????????????????????? To start with we will want an upstream location for future work to be based upon. The intention is the crate is independent of the version of Xen it runs on (above the baseline version chosen). This will entail: ? ? agreeing with upstream the name/location for the source ? ? documenting the rules for the "stable" hypercall ABI ? ? establish an internal interface to elide between ioctl mediated and direct hypercalls ? ? ensure the crate is multi-arch and has feature parity for arm64 As such we expect the implementation to be standalone, i.e. not wrapping the existing Xen libraries for mediation. There should be a close (1-to-1) mapping between the interfaces in the crate and the eventual hypercall made to the hypervisor. Estimate: 4w (elapsed likely longer due to discussion) [STR-52] 1.2 Basic Hypervisor Interactions hypercalls ([STR-53]) ??????????????????????????????????????????????????????? These are the bare minimum hypercalls implemented as both ioctl and direct calls. These allow for a very basic binary to: ? ? console_io - output IO via the Xen console ? ? domctl stub - basic stub for domain control (different API?) ? ? sysctl stub - basic stub for system control (different API?) The idea would be this provides enough hypercall interface to query the list of domains and output their status via the xen console. There is an open question about if the domctl and sysctl hypercalls are way to go. Estimate: 6w [STR-53] 1.3 [#10] Access to XenStore service ([STR-54]) ??????????????????????????????????????????????? This is a shared configuration storage space accessed via either Unix sockets (on dom0) or via the Xenbus. This is used to access configuration information for the domain. Is this needed for a backend though? Can everything just be passed direct on the command line? Estimate: 4w [STR-54] 1.4 VirtIO support hypercalls ([STR-55]) ???????????????????????????????????????? These are the hypercalls that need to be implemented to support a VirtIO backend. This includes the ability to map another guests memory into the current domains address space, register to receive IOREQ events when the guest knocks at the doorbell and inject kicks into the guest. The hypercalls we need to support would be: ? ? dmop - device model ops (*_ioreq_server, setirq, nr_vpus) ? ? foreignmemory - map and unmap guest memory The DMOP space is larger than what we need for an IOREQ backend so I've based it just on what arch/arm/dm.c exports which is the subset introduced for EPAM's virtio work. Estimate: 12w [STR-55] 2 Xen Hypervisor Support for Stratos ([STR-56]) ??????????????????????????????????????????????? These tasks include tasks needed to support the various different deployments of Stratos components in Xen. [STR-56] 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) ??????????????????????????????????????????????????????????????? Currently the foreign memory mapping support only works for dom0 due to reference counting issues. If we are to support backends running in their own domains this will need to get fixed. Estimate: 8w [STR-57] 2.2 Tweaks to tooling to launch VirtIO guests ????????????????????????????????????????????? There might not be too much to do here. The EPAM work already did something similar for their PoC for virtio-block. Essentially we need to ensure: ? ? 
DT bindings are passed to the guest for virtio-mmio device discovery ? ? Our rust backend can be instantiated before the domU is launched This currently assumes the tools and the backend are running in dom0. Estimate: 4w 3 rust-vmm support for Xen VirtIO ([STR-59]) ???????????????????????????????????????????? This encompasses the tasks required to get a vhost-user server up and running while interfacing to the Xen hypervisor. This will require the xen-sys.rs crate for the actual interface to the hypervisor. We need to work out how a Xen configuration option would be passed to the various bits of rust-vmm when something is being built. [STR-59] 3.1 Make vm-memory Xen aware ([STR-60]) ??????????????????????????????????????? The vm-memory crate is the root crate for abstracting access to the guests memory. It currently has multiple configuration builds to handle difference between mmap on Windows and Unix. Although mmap isn't directly exposed the public interfaces support a mmap like interface. We would need to: ? ? work out how to expose foreign memory via the vm-memory mechanism I'm not sure if this just means implementing the GuestMemory trait for a GuestMemoryXen or if we need to present a mmap like interface. Estimate: 8w [STR-60] 3.2 Xen IO notification and IRQ injections ([STR-61]) ????????????????????????????????????????????????????? The KVM world provides for ioeventfd (notifications) and irqfd (injection) to signal asynchronously between the guest and the backend. As far a I can tell this is currently handled inside the various VMMs which assume a KVM backend. While the vhost-user slave code doesn't see the register_ioevent/register_irqfd events it does deal with EventFDs throughout the code. Perhaps the best approach here would be to create a IOREQ crate that can create EventFD descriptors which can then be passed to the slaves to use for notification and injection. Otherwise there might be an argument for a new crate that can encapsulate this behaviour for both KVM/ioeventd and Xen/IOREQ setups? Estimate: 8w? [STR-61] 4 Stratos Demos ??????????????? These tasks cover the creation of demos that brig together all the previous bits of work to demonstrate a new area of capability that has been opened up by Stratos work. 4.1 Rust based stubdomain monitor ([STR-62]) ???????????????????????????????????????????? This is a basic demo that is a proof of concept for a unikernel style backend written in pure Rust. This work would be a useful precursor for things such as the RTOS Dom0 on a safety island ([STR-11]) or as a carrier for the virtio-scmi backend. The monitor program will periodically poll the state of the other domains and echo their status to the Xen console. 
Estimate: 4w #+name: stub-domain-example #+begin_src ditaa :cmdline -o :file stub_domain_example.png Dom0 | DomU | DomStub | | : /-------------\ : | |cPNK | | | | | | | | | | /------------------------------------\ | | GuestOS | | |cPNK | | | | | EL0 | Dom0 Userspace (xl tools, QEMU) | | | | | /---------------\ | | | | | | |cYEL | \------------------------------------/ | | | | | | +------------------------------------+ | | | | | Rust Monitor | EL1 |cA1B Dom0 Kernel | | | | | | | +------------------------------------+ | \-------------/ | \---------------/ -------------------------------------------------------------------------------=------------------ +-------------------------------------------------------------------------------------+ EL2 |cC02 Xen Hypervisor | +-------------------------------------------------------------------------------------+ #+end_src [STR-62] [STR-11] 4.2 Xen aware vhost-user master ([STR-63]) ?????????????????????????????????????????? Usually the master side of a vhost-user system is embedded directly in the VMM itself. However in a Xen deployment their is no overarching VMM but a series of utility programs that query the hypervisor directly. The Xen tooling is also responsible for setting up any support processes that are responsible for emulating HW for the guest. The task aims to bridge the gap between Xen's normal HW emulation path (ioreq) and VirtIO's userspace device emulation (vhost-user). The process would be started with some information on where the virtio-mmio address space is and what the slave binary will be. It will then: ? map the guest into Dom0 userspace and attach to a MemFD ? register the appropriate memory regions as IOREQ regions with Xen ? create EventFD channels for the virtio kick notifications (one each way) ? 
spawn the vhost-user slave process and mediate the notifications and kicks between the slave and Xen itself #+name: xen-vhost-user-master #+begin_src ditaa :cmdline -o :file xen_vhost_user_master.png Dom0 DomU | | | | | | +-------------------+ +-------------------+ | | |----------->| | | | vhost-user | vhost-user | vhost-user | : /------------------------------------\ | slave | protocol | master | | | | | (existing) |<-----------| (rust) | | | | +-------------------+ +-------------------+ | | | ^ ^ | ^ | | Guest Userspace | | | | | | | | | | | IOREQ | | | | | | | | | | | v v V | | \------------------------------------/ +---------------------------------------------------+ | +------------------------------------+ | ^ ^ | ioctl ^ | | | | | | iofd/irqfd eventFD | | | | | | Guest Kernel | | +---------------------------+ | | | | | +-------------+ | | | | | | | | virtio-dev | | | Host Kernel V | | | | +-------------+ | +---------------------------------------------------+ | +------------------------------------+ | ^ | | ^ | hyper | | | ----------------------=------------- | -=--- | ----=------ | -----=- | --------=------------------ | call | Trap | | IRQ V | V | +-------------------------------------------------------------------------------------+ | | ^ | ^ | | | +-------------+ | | EL2 | Xen Hypervisor | | | | +-------------------------------+ | | | +-------------------------------------------------------------------------------------+ #+end_src [STR-63] -- Alex Benn?e From marmarek at invisiblethingslab.com Fri Sep 24 23:59:23 2021 From: marmarek at invisiblethingslab.com (Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=) Date: Sat, 25 Sep 2021 01:59:23 +0200 Subject: [Rust-VMM] Xen Rust VirtIO demos work breakdown for Project Stratos In-Reply-To: <87pmsylywy.fsf@linaro.org> References: <87pmsylywy.fsf@linaro.org> Message-ID: On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Benn?e wrote: > Hi, Hi, > 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) > ??????????????????????????????????????????????????????????????? > > Currently the foreign memory mapping support only works for dom0 due > to reference counting issues. If we are to support backends running in > their own domains this will need to get fixed. > > Estimate: 8w > > > [STR-57] I'm pretty sure it was discussed before, but I can't find relevant (part of) thread right now: does your model assumes the backend (running outside of dom0) will gain ability to map (or access in other way) _arbitrary_ memory page of a frontend domain? Or worse: any domain? That is a significant regression in terms of security model Xen provides. It would give the backend domain _a lot more_ control over the system that it normally has with Xen PV drivers - negating significant part of security benefits of using driver domains. So, does the above require frontend agreeing (explicitly or implicitly) for accessing specific pages by the backend? There were several approaches to that discussed, including using grant tables (as PV drivers do), vIOMMU(?), or even drastically different model with no shared memory at all (Argo). Can you clarify which (if any) approach your attempt of VirtIO on Xen will use? 
A more general idea: can we collect info on various VirtIO on Xen approaches (since there is more than one) in a single place, including: - key characteristics, differences - who is involved - status - links to relevant threads, maybe I'd propose to revive https://wiki.xenproject.org/wiki/Virtio_On_Xen -- Best Regards, Marek Marczykowski-G?recki Invisible Things Lab -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From alex.bennee at linaro.org Mon Sep 27 09:50:56 2021 From: alex.bennee at linaro.org (Alex =?utf-8?Q?Benn=C3=A9e?=) Date: Mon, 27 Sep 2021 10:50:56 +0100 Subject: [Rust-VMM] Xen Rust VirtIO demos work breakdown for Project Stratos In-Reply-To: References: <87pmsylywy.fsf@linaro.org> Message-ID: <874ka68h96.fsf@linaro.org> Marek Marczykowski-G?recki writes: > [[PGP Signed Part:Undecided]] > On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Benn?e wrote: >> Hi, > > Hi, > >> 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) >> ??????????????????????????????????????????????????????????????? >> >> Currently the foreign memory mapping support only works for dom0 due >> to reference counting issues. If we are to support backends running in >> their own domains this will need to get fixed. >> >> Estimate: 8w >> >> >> [STR-57] > > I'm pretty sure it was discussed before, but I can't find relevant > (part of) thread right now: does your model assumes the backend (running > outside of dom0) will gain ability to map (or access in other way) > _arbitrary_ memory page of a frontend domain? Or worse: any domain? The aim is for some DomU's to host backends for other DomU's instead of all backends being in Dom0. Those backend DomU's would have to be considered trusted because as you say the default memory model of VirtIO is to have full access to the frontend domains memory map. > That is a significant regression in terms of security model Xen > provides. It would give the backend domain _a lot more_ control over the > system that it normally has with Xen PV drivers - negating significant > part of security benefits of using driver domains. It's part of the continual trade off between security and speed. For things like block and network backends there is a penalty if data has to be bounce buffered before it ends up in the guest address space. > So, does the above require frontend agreeing (explicitly or implicitly) > for accessing specific pages by the backend? There were several > approaches to that discussed, including using grant tables (as PV > drivers do), vIOMMU(?), or even drastically different model with no > shared memory at all (Argo). Can you clarify which (if any) approach > your attempt of VirtIO on Xen will use? There are separate strands of work in Stratos looking at how we could further secure VirtIO for architectures with distributed backends (e.g. you may accept the block backend having access to the whole of memory but an i2c multiplexer has different performance characteristics). Currently the only thing we have prototyped is "fat virtqueues" which Arnd has been working on. Here the only actual shared memory required is the VirtIO config space and the relevant virt queues. Other approaches have been discussed including using the virtio-iommu to selectively make areas available to the backend or use memory zoning so for example network buffers are only allocated in a certain region of guest physical memory that is shared with the backend. 
> A more general idea: can we collect info on various VirtIO on Xen > approaches (since there is more than one) in a single place, including: > - key characteristics, differences > - who is involved > - status > - links to relevant threads, maybe > > I'd propose to revive https://wiki.xenproject.org/wiki/Virtio_On_Xen >From the Stratos point of view Xen is a useful proving ground for general VirtIO experimentation due to being both a type-1 and open source. Our ultimate aim is have a high degree of code sharing for backends regardless of the hypervisor choice so a guest can use a VirtIO device model without having to be locked into KVM. If your technology choice is already fixed with a Xen hypervisor and portability isn't a concern you might well just stick to the existing well tested Xen PV interfaces. -- Alex Benn?e From fandree at amazon.com Mon Sep 27 14:10:30 2021 From: fandree at amazon.com (Florescu, Andreea) Date: Mon, 27 Sep 2021 14:10:30 +0000 Subject: [Rust-VMM] rust-vmm sync meeting Message-ID: <30ed7518c4b74e35a1d2a672abadedb9@EX13D10EUB003.ant.amazon.com> Update: refreshing the series. Meeting Agenda: https://etherpad.opendev.org/p/rust-vmm-sync-2021 ==============Conference Bridge Information============== You have been invited to an online meeting, powered by Amazon Chime. Chime meeting ID: 6592165432 Join via Chime clients (manually): Select 'Meetings > Join a Meeting', and enter 6592165432 Join via Chime clients (auto-call): If you invite auto-call as attendee, Chime will call you when the meeting starts, select 'Answer' Join via browser screen share: https://chime.aws/6592165432 Join via phone (US): +1-929-432-4463,,,6592165432# Join via phone (US toll-free): +1-855-552-4463,,,6592165432# International dial-in: https://chime.aws/dialinnumbers/ In-room video system: Ext: 62000, Meeting PIN: 6592165432# ================================================= ================Before your meeting:================ * Learn how to use the touch panel. * Prefer a video? Watch these touch panel how-to videos. * Find out more about room layouts. * Get more information at it.amazon.com/meetings. ================================================ Created with Amazon Meetings (fandree@, edit this series) Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 9660 bytes Desc: not available URL: From olekstysh at gmail.com Mon Sep 27 17:25:35 2021 From: olekstysh at gmail.com (Oleksandr) Date: Mon, 27 Sep 2021 20:25:35 +0300 Subject: [Rust-VMM] Xen Rust VirtIO demos work breakdown for Project Stratos In-Reply-To: <87pmsylywy.fsf@linaro.org> References: <87pmsylywy.fsf@linaro.org> Message-ID: <47bce3dd-d271-1688-d445-43eee667ade3@gmail.com> On 24.09.21 19:02, Alex Benn?e wrote: Hi Alex [snip] > > [STR-56] > > 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) > ??????????????????????????????????????????????????????????????? > > Currently the foreign memory mapping support only works for dom0 due > to reference counting issues. If we are to support backends running in > their own domains this will need to get fixed. 
> > Estimate: 8w > > > [STR-57] If I got this paragraph correctly, this is already fixed on Arm [1] [1] https://lore.kernel.org/xen-devel/1611884932-1851-17-git-send-email-olekstysh at gmail.com/ [snip] -- Regards, Oleksandr Tyshchenko From christopher.w.clark at gmail.com Tue Sep 28 05:55:40 2021 From: christopher.w.clark at gmail.com (Christopher Clark) Date: Mon, 27 Sep 2021 22:55:40 -0700 Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos In-Reply-To: <874ka68h96.fsf@linaro.org> References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org> Message-ID: On Mon, Sep 27, 2021 at 3:06 AM Alex Benn?e via Stratos-dev < stratos-dev at op-lists.linaro.org> wrote: > > Marek Marczykowski-G?recki writes: > > > [[PGP Signed Part:Undecided]] > > On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Benn?e wrote: > >> Hi, > > > > Hi, > > > >> 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57]) > >> ??????????????????????????????????????????????????????????????? > >> > >> Currently the foreign memory mapping support only works for dom0 due > >> to reference counting issues. If we are to support backends running in > >> their own domains this will need to get fixed. > >> > >> Estimate: 8w > >> > >> > >> [STR-57] > > > > I'm pretty sure it was discussed before, but I can't find relevant > > (part of) thread right now: does your model assumes the backend (running > > outside of dom0) will gain ability to map (or access in other way) > > _arbitrary_ memory page of a frontend domain? Or worse: any domain? > > The aim is for some DomU's to host backends for other DomU's instead of > all backends being in Dom0. Those backend DomU's would have to be > considered trusted because as you say the default memory model of VirtIO > is to have full access to the frontend domains memory map. > I share Marek's concern. I believe that there are Xen-based systems that will want to run guests using VirtIO devices without extending this level of trust to the backend domains. > > > That is a significant regression in terms of security model Xen > > provides. It would give the backend domain _a lot more_ control over the > > system that it normally has with Xen PV drivers - negating significant > > part of security benefits of using driver domains. > > It's part of the continual trade off between security and speed. For > things like block and network backends there is a penalty if data has to > be bounce buffered before it ends up in the guest address space. > I think we have significant flexibility in being able to modify several layers of the stack here to make this efficient, and it would be beneficial to avoid bounce buffering if possible without sacrificing the ability to enforce isolation. I wonder if there's a viable approach possible with some implementation of a virtual IOMMU (which enforces access control) that would allow a backend to commission I/O on a physical device on behalf of a guest, where the data buffers do not need to be mapped into the backend and so avoid the need for a bounce? > > > So, does the above require frontend agreeing (explicitly or implicitly) > > for accessing specific pages by the backend? There were several > > approaches to that discussed, including using grant tables (as PV > > drivers do), vIOMMU(?), or even drastically different model with no > > shared memory at all (Argo). Can you clarify which (if any) approach > > your attempt of VirtIO on Xen will use? 
>
> There are separate strands of work in Stratos looking at how we could
> further secure VirtIO for architectures with distributed backends (e.g.
> you may accept the block backend having access to the whole of memory,
> but an i2c multiplexer has different performance characteristics).
>
> Currently the only thing we have prototyped is "fat virtqueues", which
> Arnd has been working on. Here the only actual shared memory required is
> the VirtIO config space and the relevant virtqueues.

I think the "fat virtqueues" work is a positive path for investigation, and I don't think shared memory between frontend and backend is a hard requirement for those to function: a VirtIO-Argo transport driver would be able to operate with them without shared memory.

>
> Other approaches have been discussed, including using the virtio-iommu to
> selectively make areas available to the backend, or using memory zoning so
> that, for example, network buffers are only allocated in a certain region
> of guest physical memory that is shared with the backend.
>
> > A more general idea: can we collect info on various VirtIO on Xen
> > approaches (since there is more than one) in a single place, including:
> > - key characteristics, differences
> > - who is involved
> > - status
> > - links to relevant threads, maybe
> >
> > I'd propose to revive https://wiki.xenproject.org/wiki/Virtio_On_Xen

Thanks for the reminder, Marek -- I've just overhauled that page to give an overview of the several approaches in the Xen community to enabling VirtIO on Xen, and have included a first pass at the content you describe. I'm happy to be involved in improving it further.

>
> From the Stratos point of view Xen is a useful proving ground for
> general VirtIO experimentation, due to being both a type-1 hypervisor and
> open source. Our ultimate aim is to have a high degree of code sharing for
> backends regardless of the hypervisor choice, so a guest can use a VirtIO
> device model without having to be locked into KVM.

Thanks, Alex - this context is useful.

>
> If your technology choice is already fixed on Xen and portability isn't a
> concern, you might well just stick to the existing, well-tested Xen PV
> interfaces.

I wouldn't quite agree; there are additional reasons beyond portability to be looking at other options than the traditional Xen PV interfaces: e.g. an Argo-based interdomain transport for PV devices will enable fine-grained enforcement of Mandatory Access Control over the frontend / backend communication, and will not depend on XenStore, which is advantageous for Hyperlaunch / dom0less Xen deployment configurations.

thanks,

Christopher

>
> --
> Alex Bennée
> --
> Stratos-dev mailing list
> Stratos-dev at op-lists.linaro.org
> https://op-lists.linaro.org/mailman/listinfo/stratos-dev
From sstabellini at kernel.org Tue Sep 28 06:26:07 2021
From: sstabellini at kernel.org (Stefano Stabellini)
Date: Mon, 27 Sep 2021 23:26:07 -0700 (PDT)
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To:
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org>
Message-ID:

On Mon, 27 Sep 2021, Christopher Clark wrote:
> On Mon, Sep 27, 2021 at 3:06 AM Alex Bennée via Stratos-dev wrote:
> > Marek Marczykowski-Górecki writes:
> > > [[PGP Signed Part:Undecided]]
> > > On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Bennée wrote:
> > >> Hi,
> > >
> > > Hi,
> > >
> > >> 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
> > >>
> > >>   Currently the foreign memory mapping support only works for dom0 due
> > >>   to reference counting issues. If we are to support backends running in
> > >>   their own domains this will need to get fixed.
> > >>
> > >>   Estimate: 8w
> > >>
> > >>
> > >> [STR-57]
> > >
> > > I'm pretty sure it was discussed before, but I can't find the relevant
> > > (part of the) thread right now: does your model assume the backend (running
> > > outside of dom0) will gain the ability to map (or access in some other way)
> > > an _arbitrary_ memory page of a frontend domain? Or worse: of any domain?
> >
> > The aim is for some DomUs to host backends for other DomUs instead of
> > all backends being in Dom0. Those backend DomUs would have to be
> > considered trusted because, as you say, the default memory model of VirtIO
> > is to have full access to the frontend domain's memory map.
>
> I share Marek's concern. I believe that there are Xen-based systems that
> will want to run guests using VirtIO devices without extending
> this level of trust to the backend domains.

From a safety perspective, it would be challenging to deploy a system with privileged backends. From a safety perspective, it would be a lot easier if the backend were unprivileged.

This is one of those times where safety and security requirements are actually aligned.

From stefanha at gmail.com Tue Sep 28 06:30:26 2021
From: stefanha at gmail.com (Stefan Hajnoczi)
Date: Tue, 28 Sep 2021 08:30:26 +0200
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To:
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org>
Message-ID:

On Tue, Sep 28, 2021 at 7:55 AM Christopher Clark wrote:
>
> On Mon, Sep 27, 2021 at 3:06 AM Alex Bennée via Stratos-dev wrote:
>>
>> Marek Marczykowski-Górecki writes:
>>
>> > [[PGP Signed Part:Undecided]]
>> > On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Bennée wrote:
>> > That is a significant regression in terms of the security model Xen
>> > provides. It would give the backend domain _a lot more_ control over the
>> > system than it normally has with Xen PV drivers - negating a significant
>> > part of the security benefit of using driver domains.
>>
>> It's part of the continual trade-off between security and speed. For
>> things like block and network backends there is a penalty if data has to
>> be bounce buffered before it ends up in the guest address space.
>
> I think we have significant flexibility in being able to modify several
> layers of the stack here to make this efficient, and it would be beneficial
> to avoid bounce buffering if possible without sacrificing the ability to
> enforce isolation. I wonder if there's a viable approach possible with some
> implementation of a virtual IOMMU (which enforces access control) that
> would allow a backend to commission I/O on a physical device on behalf of a
> guest, where the data buffers do not need to be mapped into the backend and
> so avoid the need for a bounce?

This may not require much modification for Linux guest drivers. Although the VIRTIO drivers traditionally assumed devices can DMA to any memory location, there are already constraints in other situations like Confidential Computing, where swiotlb is used for bounce buffering.

Stefan
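[To make the bounce-buffering overhead being weighed in this thread concrete: a swiotlb-style scheme stages every transfer through a region the device side is already allowed to see, at the cost of extra copies. The following is a simplified, self-contained conceptual sketch in Rust, not real driver or swiotlb code; all names are made up for illustration.]

// Conceptual sketch of swiotlb-style bounce buffering: the backend/device
// only ever sees `shared`, a region set up in advance, and the guest copies
// data in and out of it for every request.
struct BouncePool {
    shared: Vec<u8>,   // stands in for a pre-shared/pre-granted region
    in_use: Vec<bool>, // one flag per fixed-size slot
    slot_size: usize,
}

impl BouncePool {
    fn new(slots: usize, slot_size: usize) -> Self {
        BouncePool {
            shared: vec![0; slots * slot_size],
            in_use: vec![false; slots],
            slot_size,
        }
    }

    /// Copy a buffer into a free slot; the backend is told only the slot index.
    fn map_for_device(&mut self, src: &[u8]) -> Option<usize> {
        assert!(src.len() <= self.slot_size);
        let slot = self.in_use.iter().position(|used| !*used)?;
        self.in_use[slot] = true;
        let off = slot * self.slot_size;
        self.shared[off..off + src.len()].copy_from_slice(src);
        Some(slot)
    }

    /// Copy the (possibly device-written) data back out and free the slot.
    fn unmap_from_device(&mut self, slot: usize, dst: &mut [u8]) {
        let off = slot * self.slot_size;
        dst.copy_from_slice(&self.shared[off..off + dst.len()]);
        self.in_use[slot] = false;
    }
}

fn main() {
    let mut pool = BouncePool::new(8, 4096);
    let request = vec![0xabu8; 512];
    let slot = pool.map_for_device(&request).expect("no free slot");
    // ... backend processes the slot here ...
    let mut response = vec![0u8; 512];
    pool.unmap_from_device(slot, &mut response);
    // The two copies above are exactly the overhead being traded against
    // mapping guest memory directly into the backend domain.
}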
From andrew.cooper3 at citrix.com Tue Sep 28 11:37:35 2021
From: andrew.cooper3 at citrix.com (Andrew Cooper)
Date: Tue, 28 Sep 2021 12:37:35 +0100
Subject: [Rust-VMM] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To: <87pmsylywy.fsf@linaro.org>
References: <87pmsylywy.fsf@linaro.org>
Message-ID:

On 24/09/2021 17:02, Alex Bennée wrote:
> 1.1 Upstream an "official" rust crate for Xen ([STR-52])
>
>   To start with we will want an upstream location for future work to be
>   based upon. The intention is the crate is independent of the version
>   of Xen it runs on (above the baseline version chosen). This will
>   entail:
>
>     agreeing with upstream the name/location for the source

Probably github/xen-project/rust-bindings unless anyone has a better suggestion. We almost certainly want a companion repository configured as a hello-world example using the bindings and (cross-)compiled for each backend target.

>     documenting the rules for the "stable" hypercall ABI

Easy. There shall be no use of unstable interfaces at all. This is the *only* way to avoid making the bindings dependent on the version of the hypervisor, and it will be a major improvement in the Xen ecosystem. Any unstable hypercall wanting to be used shall be stabilised in Xen first, which has been vehemently agreed to at multiple dev summits in the past, and will be a useful way of guiding the stabilisation effort.

>     establish an internal interface to elide between ioctl mediated
>     and direct hypercalls
>     ensure the crate is multi-arch and has feature parity for arm64
>
>   As such we expect the implementation to be standalone, i.e. not
>   wrapping the existing Xen libraries for mediation. There should be a
>   close (1-to-1) mapping between the interfaces in the crate and the
>   eventual hypercall made to the hypervisor.
>
>   Estimate: 4w (elapsed likely longer due to discussion)
>
>
> [STR-52]
>
>
> 1.2 Basic Hypervisor Interactions hypercalls ([STR-53])
>
>   These are the bare minimum hypercalls implemented as both ioctl and
>   direct calls. These allow for a very basic binary to:
>
>     console_io - output IO via the Xen console
>     domctl stub - basic stub for domain control (different API?)
>     sysctl stub - basic stub for system control (different API?)
>
>   The idea would be this provides enough hypercall interface to query
>   the list of domains and output their status via the xen console. There
>   is an open question about whether the domctl and sysctl hypercalls are
>   the way to go.

console_io probably wants implementing as a backend to println!() or the log module, because users of the crate won't want to change how they printf()/etc. depending on the target. That said, console_io hypercalls only do anything for unprivileged VMs in debug builds of the hypervisor. This is fine for development, and less fine in production, so logging ought to use the PV console instead (with room for future expansion to an Argo transport).
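[As a minimal sketch of the println!()/log idea above: the crate could expose a `log::Log` implementation whose sink is a console-I/O hypercall wrapper. `xen::hypercall::console_io_write` and the module layout below are assumptions for illustration only, not an agreed or existing crate API.]

use log::{Level, LevelFilter, Metadata, Record};

mod xen {
    pub mod hypercall {
        /// Assumed thin wrapper over HYPERVISOR_console_io(CONSOLEIO_write, ...);
        /// stubbed out here because the real crate API does not exist yet.
        pub fn console_io_write(msg: &str) {
            let _ = msg;
        }
    }
}

struct XenConsoleLogger;

impl log::Log for XenConsoleLogger {
    fn enabled(&self, metadata: &Metadata) -> bool {
        metadata.level() <= Level::Debug
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            // Format once, then hand the whole line to the hypervisor console.
            xen::hypercall::console_io_write(&format!(
                "[{}] {}\n",
                record.level(),
                record.args()
            ));
        }
    }

    fn flush(&self) {}
}

static LOGGER: XenConsoleLogger = XenConsoleLogger;

/// Install the logger; callers keep using log::info!()/log::error!()
/// regardless of whether the target is Xen, KVM or anything else.
pub fn init() {
    let _ = log::set_logger(&LOGGER).map(|()| log::set_max_level(LevelFilter::Debug));
}

[The point of this shape is that swapping the sink from console_io to the PV console, or later an Argo transport, would only touch the logger implementation, not the code emitting the log messages.]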
domctl/sysctl are unstable interfaces. I don't think they'll be necessary for a basic virtio backend, and they will be the most complicated hypercalls to stabilise.

>
>   Estimate: 6w
>
>
> [STR-53]
>
>
> 1.3 [#10] Access to XenStore service ([STR-54])
>
>   This is a shared configuration storage space accessed via either Unix
>   sockets (on dom0) or via the Xenbus. This is used to access
>   configuration information for the domain.
>
>   Is this needed for a backend though? Can everything just be passed
>   directly on the command line?

Currently, if you want a stubdom and you want to instruct it to shut down cleanly, it needs xenstore. Any stubdom which wants disk or network needs xenstore too.

xenbus (the transport) does need to be split between ioctl()s and raw hypercalls. xenstore (the protocol) could be in the xen crate, or a separate one, as it is a piece of higher-level functionality.

However, we should pay attention to non-xenstore use cases and not paint ourselves into a corner. Some security use cases would prefer not to use shared memory, and e.g. might consider using an Argo transport instead of the traditional grant-shared page.

>
>   Estimate: 4w
>
>
> [STR-54]
>
>
> 1.4 VirtIO support hypercalls ([STR-55])
>
>   These are the hypercalls that need to be implemented to support a
>   VirtIO backend. This includes the ability to map another guest's memory
>   into the current domain's address space, register to receive IOREQ
>   events when the guest knocks at the doorbell, and inject kicks into the
>   guest. The hypercalls we need to support would be:
>
>     dmop - device model ops (*_ioreq_server, setirq, nr_vcpus)
>     foreignmemory - map and unmap guest memory

also evtchn, which you need for ioreq notifications.

>   The DMOP space is larger than what we need for an IOREQ backend so
>   I've based it just on what arch/arm/dm.c exports, which is the subset
>   introduced for EPAM's virtio work.

One thing we will want to be careful with is the interface. The current DMOPs are a mess of units (particularly frames vs addresses, which will need to change in Xen in due course) as well as range inclusivity/exclusivity.
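[For a sense of the surface being discussed, a rough sketch of the shape a crate-level IOREQ backend loop might take. Every type and function below is hypothetical and stands in for thin wrappers over dmop, foreignmemory and evtchn; it is not an agreed interface and deliberately leaves the bodies unimplemented.]

use std::io;

pub struct DomId(pub u16);

pub struct Ioreq {
    pub addr: u64,
    pub is_write: bool,
    // data, size, state, ... omitted
}

pub struct ForeignMapping; // mapped guest pages (the part gated on STR-57 for non-dom0)
pub struct EventChannel;   // evtchn used for doorbell notifications
pub struct IoreqServer;    // dmop *_ioreq_server handle

impl IoreqServer {
    pub fn create(_domid: DomId) -> io::Result<Self> {
        unimplemented!("dmop create_ioreq_server wrapper")
    }
    pub fn map_guest_range(&self, _gfn: u64, _count: usize) -> io::Result<ForeignMapping> {
        unimplemented!("foreignmemory map wrapper")
    }
    pub fn doorbell(&self) -> io::Result<EventChannel> {
        unimplemented!("evtchn bind wrapper")
    }
    pub fn next_ioreq(&self) -> io::Result<Ioreq> {
        unimplemented!("block until the guest knocks")
    }
    pub fn notify_guest(&self, _chan: &EventChannel) -> io::Result<()> {
        unimplemented!("inject the completion kick")
    }
}

pub fn run_backend(domid: DomId) -> io::Result<()> {
    let server = IoreqServer::create(domid)?;
    // Example GFN/count only: map the virtqueue pages once up front.
    let _queue = server.map_guest_range(0x4_0000, 4)?;
    let kick = server.doorbell()?;
    loop {
        // Wait for the guest to write the doorbell register, service the
        // request against the mapped queue, then kick the guest back.
        let req = server.next_ioreq()?;
        let _ = (req.addr, req.is_write);
        server.notify_guest(&kick)?;
    }
}

[Keeping such wrappers 1-to-1 with the eventual hypercalls, as proposed for the crate, would let the same loop sit on top of either the ioctl-mediated path or direct hypercalls.]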
>
>   Estimate: 12w
>
>
> [STR-55]
>
>
> 2 Xen Hypervisor Support for Stratos ([STR-56])
>
>   These tasks include tasks needed to support the various different
>   deployments of Stratos components in Xen.
>
>
> [STR-56]
>
> 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
>
>   Currently the foreign memory mapping support only works for dom0 due
>   to reference counting issues. If we are to support backends running in
>   their own domains this will need to get fixed.

Oh. It appears as if some of this was completed in https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=4922caf1de5a08d3eefb4058de1b7f0122c8f76f

~Andrew

From olekstysh at gmail.com Tue Sep 28 20:18:53 2021
From: olekstysh at gmail.com (Oleksandr Tyshchenko)
Date: Tue, 28 Sep 2021 23:18:53 +0300
Subject: [Rust-VMM] [Stratos-dev] Xen Rust VirtIO demos work breakdown for Project Stratos
In-Reply-To:
References: <87pmsylywy.fsf@linaro.org> <874ka68h96.fsf@linaro.org>
Message-ID:

On Tue, Sep 28, 2021 at 9:26 AM Stefano Stabellini wrote:

Hi Stefano, all

[Sorry for the possible format issues]

On Mon, 27 Sep 2021, Christopher Clark wrote:
> > On Mon, Sep 27, 2021 at 3:06 AM Alex Bennée via Stratos-dev <
> > stratos-dev at op-lists.linaro.org> wrote:
> > >
> > > Marek Marczykowski-Górecki writes:
> > >
> > > > [[PGP Signed Part:Undecided]]
> > > > On Fri, Sep 24, 2021 at 05:02:46PM +0100, Alex Bennée wrote:
> > > >> Hi,
> > > >
> > > > Hi,
> > > >
> > > >> 2.1 Stable ABI for foreignmemory mapping to non-dom0 ([STR-57])
> > > >>
> > > >>   Currently the foreign memory mapping support only works for dom0 due
> > > >>   to reference counting issues. If we are to support backends running in
> > > >>   their own domains this will need to get fixed.
> > > >>
> > > >>   Estimate: 8w
> > > >>
> > > >>
> > > >> [STR-57]
> > > >
> > > > I'm pretty sure it was discussed before, but I can't find the relevant
> > > > (part of the) thread right now: does your model assume the backend (running
> > > > outside of dom0) will gain the ability to map (or access in some other way)
> > > > an _arbitrary_ memory page of a frontend domain? Or worse: of any domain?
> > >
> > > The aim is for some DomUs to host backends for other DomUs instead of
> > > all backends being in Dom0. Those backend DomUs would have to be
> > > considered trusted because, as you say, the default memory model of VirtIO
> > > is to have full access to the frontend domain's memory map.
> >
> > I share Marek's concern. I believe that there are Xen-based systems that
> > will want to run guests using VirtIO devices without extending
> > this level of trust to the backend domains.
>
> From a safety perspective, it would be challenging to deploy a system
> with privileged backends. From a safety perspective, it would be a lot
> easier if the backend were unprivileged.
>
> This is one of those times where safety and security requirements are
> actually aligned.

Well, the foreign memory mapping has one advantage in the context of the Virtio use case, which is that the Virtio infrastructure in the Guest doesn't require any modifications to run on top of Xen. The only issue with foreign memory here is that Guest memory is actually mapped without its agreement, which doesn't fit well into the security model. (There is also one more issue with XSA-300, but I think it will go away sooner or later; at least there are some attempts to eliminate it.)

While the ability to map any part of Guest memory is not an issue for a backend running in Dom0 (which we usually trust), this would certainly violate the Xen security model if we want to run the backend in another domain, so I completely agree with the existing concern. It was discussed before [1], but I couldn't find any decisions regarding that.
As I understand it, one of the possible ideas is to have some entity in Xen (PV IOMMU/virtio-iommu/whatever) that works in protection mode, so it denies all foreign mapping requests from a backend running in a DomU by default and only allows requests for mappings which were *implicitly* granted by the Guest beforehand. For example, Xen could be informed which MMIOs hold the queue PFN and notify registers (as it traps the accesses to these registers anyway) and could theoretically parse the frontend request and retrieve the descriptors to decide which GFNs are actually *allowed*. I can't say for sure (sorry, not familiar enough with the topic), but by implementing the virtio-iommu device in Xen we could probably avoid Guest modifications altogether. Of course, for this to work the Virtio infrastructure in the Guest should use the DMA API, as mentioned in [1].

Would such a "restricted foreign mapping" solution retain the Xen security model and be accepted by the Xen community? I wonder: has someone already looked in this direction? Are there any pitfalls here, or is this even feasible?

[1] https://lore.kernel.org/xen-devel/464e91ec-2b53-2338-43c7-a018087fc7f6 at arm.com/

--
Regards,

Oleksandr Tyshchenko
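[Purely as an illustration of the "restricted foreign mapping" idea described above: the bookkeeping could amount to an allow-list of GFNs harvested from the parsed descriptors, consulted before any foreign-mapping request from the backend domain is honoured. All names below are hypothetical and exist only for this sketch.]

use std::collections::HashSet;

#[derive(Default)]
struct MappingPolicy {
    allowed_gfns: HashSet<u64>,
}

impl MappingPolicy {
    /// Called when the descriptors of a frontend request have been parsed
    /// (e.g. after trapping the queue notify register write).
    fn grant(&mut self, gfns: &[u64]) {
        self.allowed_gfns.extend(gfns.iter().copied());
    }

    /// Called when the request completes and the buffers are retired.
    fn revoke(&mut self, gfns: &[u64]) {
        for gfn in gfns {
            self.allowed_gfns.remove(gfn);
        }
    }

    /// Gate an incoming foreign-mapping request from the backend domain.
    fn may_map(&self, gfn: u64) -> bool {
        self.allowed_gfns.contains(&gfn)
    }
}

fn main() {
    let mut policy = MappingPolicy::default();
    policy.grant(&[0x1234, 0x1235]);  // descriptors of one in-flight request
    assert!(policy.may_map(0x1234));  // backend maps a granted frame
    assert!(!policy.may_map(0x9999)); // arbitrary guest memory stays off-limits
    policy.revoke(&[0x1234, 0x1235]);
    assert!(!policy.may_map(0x1234));
}

[Where exactly such bookkeeping would live - a PV IOMMU, a virtio-iommu device model in Xen, or something else - is the open question raised in the message above.]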