From sebastien.boeuf at intel.com Tue May 7 13:43:50 2019
From: sebastien.boeuf at intel.com (Boeuf, Sebastien)
Date: Tue, 7 May 2019 13:43:50 +0000
Subject: [Rust-VMM] [PTG] Meeting Notes
Message-ID: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com>

Hi everyone!

Here are some notes about the PTG meeting that we had in Denver:

Licensing
---------

The purpose of the dual licensing is to make sure that Apache2 will not conflict with GPLv2-licensed projects such as QEMU, which could eventually use rust-vmm. The decision is to move from the dual MIT+Apache2 proposal to dual 3-clause BSD+Apache2. 3-clause BSD is compatible with GPLv2, just as MIT is, but the benefit of 3-clause BSD over MIT is that it does not conflict with the existing Crosvm code, which already uses 3-clause BSD.

CI
--

We currently have Buildkite running on the kvm-ioctls crate. Buildkite runs on x86-64 and aarch64. We need some Windows testing, to validate the abstraction patches proposed by the CrowdStrike folks. Cloudbase will provide the Windows server to run the Windows CI.

There is a proposal to have a dedicated “test” repo in the rust-vmm organization. This would allow every crate to rely on this common “test” repo to centralize the tests.

We also talked about creating a “dummy” VMM that would be a superset VMM, since it would pull in every rust-vmm crate. This VMM would allow full integration testing, in addition to the unit tests already running on each crate.

The CI should build against top-of-tree crates on every pull request, as we want to test the latest master version. Because the CI will ensure that a pull request is merged only if the CI passes, we can be sure that master will never be broken. This is why we can safely run integration tests based on the master branch of each crate.
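As an illustration of the top-of-tree approach, the “dummy” VMM’s manifest could pin its dependencies to each repository’s master branch during regular CI runs. This is only a sketch: the actual repo layout and set of crates may differ.

```toml
# Hypothetical Cargo.toml fragment for the integration-testing "dummy" VMM.
# During regular CI, dependencies point at the master branch of each repo;
# a "release" pull request would replace these with versioned crates.io entries.
[dependencies]
kvm-ioctls = { git = "https://github.com/rust-vmm/kvm-ioctls", branch = "master" }
vm-memory  = { git = "https://github.com/rust-vmm/vm-memory",  branch = "master" }
```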
Testing will be slightly different on a “release” pull request, because it will modify the Cargo.toml file to make sure we’re testing the right version of every crate before we give the green light for publishing.

Release/Crate publishing
------------------------

How will the crates.io publishing be done?

“All at once” vs. “each crate can be updated at any time”. Because we want to keep the CI simple, which means it will always test top of tree, we chose the “all at once” solution. Releases are cheap, hence we will bump the versions of every crate each time we want to update one or more crates.

We decided not to have any stable branches on our repositories. The project is not mature enough to justify the complexity of maintaining one or more stable branches. With the same idea in mind, we decided not to have any stable releases on crates.io.

How do we publish on crates.io with some human gatekeeping?

We didn’t reach a decision on this question, but here are the two approaches discussed:
We would create a bot with a crates.io key stored on GitHub, and a set of maintainers who can push the button to let the bot do the work.
OR
We would have manual publishing by any maintainer, using the maintainers’ own keys.

The concern regarding the bot is the key that needs to be stored on GitHub (a security concern about the key being stolen).

Crosvm/Firecracker consuming rust-vmm crates
--------------------------------------------

Both projects expect to consume crates directly from crates.io, because this means the crates are mature enough to be consumed.

The criteria for a crate to be published on crates.io are to have proper documentation, tests, … as documented here:
https://github.com/rust-vmm/community/issues/14#issue-408351841

QEMU’s interest in rust-vmm
---------------------------

QEMU could benefit from low-level parts such as the vm-memory crate to make QEMU core parts more secure.
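To illustrate why a Rust guest-memory abstraction helps with safety, here is a minimal, stdlib-only sketch. The type and trait names below are hypothetical, not the actual vm-memory API: the point is that bounds-checked accessors turn out-of-range guest accesses into explicit errors rather than host memory corruption.

```rust
// Hypothetical sketch of a guest-memory abstraction; the real vm-memory
// crate API differs. GuestAddress and the trait below are illustrative only.

#[derive(Clone, Copy, Debug, PartialEq)]
struct GuestAddress(u64);

// A minimal trait for reading/writing guest physical memory.
trait GuestMemory {
    fn write(&mut self, addr: GuestAddress, data: &[u8]) -> Result<(), String>;
    fn read(&self, addr: GuestAddress, data: &mut [u8]) -> Result<(), String>;
}

// A toy backend: a single Vec-backed region starting at guest address 0.
struct VecMemory {
    bytes: Vec<u8>,
}

impl GuestMemory for VecMemory {
    fn write(&mut self, addr: GuestAddress, data: &[u8]) -> Result<(), String> {
        let start = addr.0 as usize;
        let end = start.checked_add(data.len()).ok_or("address overflow")?;
        if end > self.bytes.len() {
            return Err(format!("write past end of guest memory at {:?}", addr));
        }
        self.bytes[start..end].copy_from_slice(data);
        Ok(())
    }

    fn read(&self, addr: GuestAddress, data: &mut [u8]) -> Result<(), String> {
        let start = addr.0 as usize;
        let end = start.checked_add(data.len()).ok_or("address overflow")?;
        if end > self.bytes.len() {
            return Err(format!("read past end of guest memory at {:?}", addr));
        }
        data.copy_from_slice(&self.bytes[start..end]);
        Ok(())
    }
}

fn main() {
    let mut mem = VecMemory { bytes: vec![0; 4096] };
    mem.write(GuestAddress(0x100), &42u32.to_le_bytes()).unwrap();
    let mut buf = [0u8; 4];
    mem.read(GuestAddress(0x100), &mut buf).unwrap();
    assert_eq!(u32::from_le_bytes(buf), 42);
    // Out-of-bounds accesses are rejected instead of corrupting host memory.
    assert!(mem.write(GuestAddress(4096), &[0u8]).is_err());
    println!("guest memory sketch OK");
}
```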
The other aspect is vhost-user backends: they should be able to consume rust-vmm crates so that they can be implemented in Rust and reused by any VMM.

vm-sys-utils
------------

The first PR is ready and waiting for more review from the Firecracker folks. From the Crosvm perspective, the PR is alright, but internal projects (not only about VMM) are using sys-utils too. That’s the reason why it’s not straightforward to replace their sys-utils crate with the vm-sys-utils one: they will have to write a sys-utils wrapper on top of vm-sys-utils, so that other internal projects can still consume the right set of utility functions.

Vhost and vhost-user
--------------------

The vhost pull request has been submitted by Gerry and needs to be reviewed. We didn’t spend time reviewing it during the PTG.

The vhost-user crate needs to implement the protocol for both the slave and the master. This way, the master can be consumed from the VMM side, and the slave can be consumed from any vhost-user daemon (interesting for any VMM that could directly reuse the vhost-user backend). In particular, there is some ongoing work from Red Hat about writing a virtiofsd vhost-user daemon in Rust. This should be part of the rust-vmm project, pulling in the vhost-user protocol.

The remaining question is whether the vhost backends, including vhost-user (and hence the protocol itself), should live under a single crate.

vm-allocator
------------

We had a discussion about using a vm-allocator crate as a helpful component to decide about memory ranges (MMIO or PIO) for any memory region related to a device.

Based on the feedback from Paolo, Alex and Stefan, we need to design this allocator carefully if we want to be able to support PCI BAR programming from the firmware or the guest OS. This means we should be able to handle any sort of PCI reprogramming to update the ranges chosen by the vm-allocator, since this is how the PCI spec is defined.
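As a sketch of the vm-allocator idea, the allocator below hands out address ranges from an MMIO window and allows them to be released and re-allocated, since firmware or the guest OS may reprogram a PCI BAR at any time. All names and the API shape are illustrative assumptions, not the actual crate design.

```rust
// Hypothetical sketch of a vm-allocator-style range allocator.
// First-fit over a single window; real designs would track alignment,
// multiple windows (MMIO/PIO), and per-device ownership.

use std::collections::BTreeMap;

struct AddressAllocator {
    base: u64,
    end: u64,                 // exclusive upper bound of the window
    used: BTreeMap<u64, u64>, // start address -> length
}

impl AddressAllocator {
    fn new(base: u64, size: u64) -> Self {
        AddressAllocator { base, end: base + size, used: BTreeMap::new() }
    }

    // First-fit allocation of `size` bytes; returns the range's start address.
    fn allocate(&mut self, size: u64) -> Option<u64> {
        let mut candidate = self.base;
        for (&start, &len) in &self.used {
            if candidate + size <= start {
                break; // the gap before this used range is big enough
            }
            candidate = start + len;
        }
        if candidate + size <= self.end {
            self.used.insert(candidate, size);
            Some(candidate)
        } else {
            None
        }
    }

    // Release a range, e.g. when the guest reprograms a PCI BAR.
    fn free(&mut self, start: u64) -> bool {
        self.used.remove(&start).is_some()
    }
}

fn main() {
    // A toy 32 MiB MMIO window.
    let mut mmio = AddressAllocator::new(0xd000_0000, 32 << 20);
    let bar0 = mmio.allocate(0x1000).unwrap();
    let bar1 = mmio.allocate(0x1000).unwrap();
    assert_ne!(bar0, bar1);
    // BAR reprogramming: free the first range, then the space can be reused.
    assert!(mmio.free(bar0));
    assert_eq!(mmio.allocate(0x1000), Some(bar0));
    println!("allocator sketch OK");
}
```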
Summary of the priorities
-------------------------

We could maintain a list of priorities in an etherpad (or any sort of public/shared document), and at the end of each week, send out the list of hot topics and priorities to make sure everyone in the community is on the same page.

Code coverage
-------------

Which tool should we use to track code coverage?

Kcov is one alternative, but it seems it does not run on aarch64, which is a blocker since the project wants to support multiple CPU architectures. Kcov might be the one picked up at first, but we need to investigate other solutions.

Do we need to gate pull requests based on the coverage result?

The discussion went both ways on this topic, but I think the solution we agreed upon was to gate based on a code coverage value. An important point about this value: it is not immutable, and based on the manual review from the maintainers, we can lower it if that makes sense. For instance, if some new piece of code is being added, it does not mean that we have to implement tests just for the sake of keeping the coverage at the same level. If the maintainers, smart people that they are, realize it makes no sense to test this new piece of code, then the decision will be made to reduce the threshold.

Linters
-------

As part of the regular CI running on every pull request, we want to run multiple linters to maintain good code quality.

Fuzzing
-------

We want some fuzzing on the rust-vmm crates. Now the question is to identify which ones are the most unsafe crates. For instance, being able to fuzz the virtqueues (part of the virtio crate) would be very interesting to validate their proper behavior.

Also, fuzzing the vhost-user backends, once they are part of the rust-vmm project, will be a very important task if we want to provide secure backends for any VMM that could reuse them.

Security process
----------------

At some point, the project will run into some nasty bugs considered real security threats.
In order to anticipate the day this happens, we should define a clear process for limiting the impact on rust-vmm users, and describe how to handle such an issue (quick fix, long term plan, etc.).

vmm-vcpu
--------

After a lot of discussion about the feasibility of having a trait for Vcpu, we came to the conclusion that, without further proof and justification that the trait will provide any benefit, we should simply split HyperV and KVM into separate packages. The reason is that we don’t think those two hypervisors have a lot in common, and it might be more effort to try to find similarities than to split them into distinct pieces.

One interesting data point that we would like to look at in the context of this discussion is the work that Alessandro has been doing to port Firecracker to HyperV. Being able to look at his code might be helpful in understanding the fundamental differences between HyperV and KVM.

vm-memory
---------

Pull request #10 splits out the mmap functionality coming from Linux, and adds support for the Windows mmap equivalent. The code has been acknowledged by everybody as ready to be merged once the comments about squashing and reworking the commit message are addressed.

vm-device
---------

We discussed the vm-device issue that has been open for some time now. Some mentioned that it is important to keep the Bus trait generic so that any VMM can still reuse it, adapting some wrappers for devices if necessary. Based on the comments on the issue, it was pretty unclear where things are going with this crate, and that’s why we agreed to wait for the pull request to be submitted before going further into hypothetical reviews and comments.

Samuel will take care of submitting the pull request for this.

Community README about rust-vmm goals
-------------------------------------

We listed the main points we want to mention in the README of the community repository.
Andreea took the AR to write the documentation describing the goals and motivation behind the project, based on the defined skeleton.

We also mentioned that having a github.io webpage for the project would be a better way to promote the project. We will need to create a dedicated repo for that, as part of the rust-vmm GitHub organization. We will need to put some effort into putting this webpage together at some point, the first step being to duplicate more or less the content of the README.

Thanks,
Sebastien

From liuj97 at gmail.com Tue May 7 14:28:11 2019
From: liuj97 at gmail.com (Liu Jiang)
Date: Tue, 7 May 2019 22:28:11 +0800
Subject: [Rust-VMM] [PTG] Meeting Notes
In-Reply-To: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com>
References: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com>
Message-ID:

Great thanks for the nice summary!

> On May 7, 2019, at 9:43 PM, Boeuf, Sebastien wrote:
> [...]
> The concern regarding the bot is about the key which needs to be stored
> on Github (security concern about having the key being stolen).

At least some guide docs on how to publish crates would help a lot :)

> The remaining question is to determine if the vhost backends, including
> vhost-user (hence including the protocol itself) should live under a
> single crate?

Currently vhost, vhost-user-master and vhost-user-slave are defined as features in the same crate; it may help to ease tests and vhost-user message definitions.

> Based on the feedback from Paolo, Alex and Stefan, we need to design
> carefully this allocator if we want to be able to support PCI BAR
> programming from the firmware or the guest OS. This means we should be
> able to handle any sort of PCI reprogramming to update the ranges
> chosen by the vm-allocator, since this is how the PCI spec is defined.

How about vm-resource-mgr instead of vm-allocator? It could be used to manage guest memory, MMIO, PIO, IRQs, etc. Then we could use the vm-resource-mgr to manage guest memory, and the vm-memory crate would only provide methods to access guest memory allocated from the resource manager.

> Security process
> ----------------
> [...]

Good ideas!

> [...]
> Thanks,
> Sebastien

_______________________________________________
Rust-vmm mailing list
Rust-vmm at lists.opendev.org
http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm

From jenny.mankin at crowdstrike.com Wed May 8 03:26:32 2019
From: jenny.mankin at crowdstrike.com (Jenny Mankin)
Date: Wed, 8 May 2019 03:26:32 +0000
Subject: [Rust-VMM] [PTG] Meeting Notes
In-Reply-To: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com>
References: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com>
Message-ID: <0db849c1afa04dd3930f092f4cd1b000@casmbox05.crowdstrike.sys>

Thanks for the detailed summary of everything discussed at the PTG meetup!

Regarding the vmm-vcpu crate, I'd provided more detail on the PR as a reply to Zach's comment, but it's not a very visible location that gets a lot of traffic, so I thought I'd solicit feedback here as well. In that thread, I've provided what I think is a technical justification for a vCPU abstraction crate, regardless of its ultimate utility in a full hypervisor-agnostic or Hyper-V implementation of Firecracker or Crosvm. The full explanation is below (feel free to reply here, or on the comment thread itself at https://github.com/rust-vmm/vmm-vcpu/pull/3#issuecomment-489174754). I'm curious about the community's thoughts on whether this is sufficient justification for the crate, or whether demonstrable integration into Crosvm or Firecracker is actually a prerequisite for a rust-vmm abstraction crate such as this one (e.g., as requested, proving that Firecracker/Crosvm can support different hypervisors).
**** Original comment below (thread: https://github.com/rust-vmm/vmm-vcpu/pull/3#issuecomment-489174754) ****

You are certainly right that the differences in the VcpuExit structure (due to the underlying vCPU exits exposed by each hypervisor) mean that any code making use of the run() function would need to specialize its processing of the exits per hypervisor. This would need to be accomplished either directly at the layer performing the vCPU run(), or it might itself be abstracted within a higher-level crate. For example, a hypervisor-agnostic VM crate might utilize the trait generic (with VMM-specific references providing the implementation of those operations). See, for example, the proposed issue to provide additional abstractions of a VM and a VMM that make use of the abstracted vCPU functionality.

Getting crosvm/Firecracker to achieve parity with Hyper-V in addition to KVM is an ambitious goal, and it's true that doing so will require more layers than just swapping in a vCPU implementation of a generic trait. The specifics of what this would look like are something we'd like to look at, and focusing on/POCing the handling of the VcpuExit is a good suggestion.

Stepping back from these more ambitious goals, I think the vCPU crate still offers an opportunity to abstract common VMM-related operations in higher-level crates that use common vCPU functionality. The arch crate comes to mind. In developing the Hyper-V-based libwhp crate, some of the arch functionality had to be duplicated, stripped of KVM-specific objects and APIs, and imported as a separate libwhp-specific crate. That duplication was one of the motivations behind my proposal of the arch crate here for rust-vmm: it naturally lends itself to a hypervisor-agnostic solution that can be easily imported into different VMM projects.
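A minimal, stdlib-only sketch of the kind of trait generic described above. All names here are hypothetical (not the actual vmm-vcpu API): a generic function bounded on a Vcpu trait is monomorphized per hypervisor backend, so the abstraction costs nothing at runtime, while exit handling still specializes on each backend's exit type.

```rust
// Hypothetical sketch of a hypervisor-agnostic Vcpu trait with an associated
// exit type; names are illustrative, not the actual vmm-vcpu API.

// Each hypervisor backend would define its own exit reasons.
#[derive(Debug, PartialEq)]
enum ToyExit {
    Hlt,
    IoOut { port: u16, value: u8 },
}

trait Vcpu {
    type Exit;
    fn run(&mut self) -> Result<Self::Exit, String>;
}

// A dummy backend standing in for a KVM- or WHP-based implementation.
struct ToyVcpu {
    pending: Vec<ToyExit>,
}

impl Vcpu for ToyVcpu {
    type Exit = ToyExit;
    fn run(&mut self) -> Result<ToyExit, String> {
        // Replay scripted exits from the back of the Vec.
        self.pending.pop().ok_or_else(|| "no more exits".to_string())
    }
}

// Generic vCPU loop: monomorphized at compile time, so calling vcpu.run()
// is a direct call (static dispatch, no vtable lookup).
fn run_until_halt<V: Vcpu<Exit = ToyExit>>(vcpu: &mut V) -> Result<u32, String> {
    let mut io_writes = 0;
    loop {
        match vcpu.run()? {
            ToyExit::Hlt => return Ok(io_writes),
            ToyExit::IoOut { .. } => io_writes += 1,
        }
    }
}

fn main() {
    // Exits are popped from the back: two I/O writes, then a halt.
    let mut vcpu = ToyVcpu {
        pending: vec![
            ToyExit::Hlt,
            ToyExit::IoOut { port: 0x3f8, value: b'A' },
            ToyExit::IoOut { port: 0x3f8, value: b'B' },
        ],
    };
    assert_eq!(run_until_halt(&mut vcpu), Ok(2));
    println!("vcpu sketch OK");
}
```

Note that `run_until_halt` is still bound to one exit type; this mirrors the point above that exit processing must specialize per hypervisor, even when the rest of the loop is generic.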
And as we discussed a couple of weeks ago on the rust-vmm call, since those APIs accept the Vcpu trait generic as an input parameter, there is "zero cost" to the abstraction thanks to static dispatch. That is one example where the generic abstraction provided by the vCPU crate benefits other hypervisor-agnostic crates, and I think it's reasonable to assume others exist. For example, we are also currently researching and developing a Windows loader crate; this makes use of these same vCPU APIs and abstraction implementations to set up the VM.

So, independent of our goals of achieving interchangeable VMMs in ambitious projects like crosvm and Firecracker, I think that having generic crates abstracting lower-level functionality benefits smaller-scale projects, like those that might be using rust-vmm crates as building blocks for their own VMMs.

-----Original Message-----
From: Boeuf, Sebastien
Sent: Tuesday, May 7, 2019 6:44 AM
To: rust-vmm at lists.opendev.org
Subject: [External] [Rust-VMM] [PTG] Meeting Notes

[...]
This would allow every crate to rely on this common “test” repo to centralize the tests. Also, we talked about creating a “dummy” VMM that would be a superset VMM since it would pull every rust-vmm crate. This VMM would allow full integration testing, additionally to the unit tests already running on each crate. The CI should rely on top of tree crate on every pull request, as we want to test the latest master version. Because the CI will ensure that a pull request is merged only if the CI passes, we can be sure that master will never be broken. This is the reason why we can safely run integration tests based on master branch from each crate. Testing will be slightly different on a “release” pull request because it will modify the Cargo.toml file to make sure we’re testing the right version of every crate, before we can give the green light before publishing. Release/Crate publishing ------------------------ How will the https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= publishing be done? “All at once” vs “each crate can be updated at any time”. Because we want to keep the CI simple, which means it will always test on top of tree, we chose to go with “all at once” solution. Releases are cheap, hence we will bump the versions of every crate every time we want to update one or more crates. We took the decision not to have any stable branches on our repositories. The project is not mature enough to increase the complexity of having one or more stable branches. 
With the same idea in mind, we took the decision not to have any stable releases to https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= . How to publish on https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= with some human gatekeeping? We didn’t take a decision regarding this question, but here are the two discussed approaches: We would create a bot having a https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= key stored on Github with a set of maintainers that can push the button to let the bot do the work. OR We would have manual publishing from any maintainer from the set of maintainers keys. The concern regarding the bot is about the key which needs to be stored on Github (security concern about having the key being stolen). Crosvm/Firecracker consuming rust-vmm crates -------------------------------------------- Both projects expect to consume crates directly from https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= , because this means the crates are mature enough to be consumed. 
The criteria for a crate to be published on https://urldefense.proofpoint.com/v2/url?u=http-3A__crates.io&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=xhrEdtnHf3MFtE3B_lIWGO8-CDq9nQiA_ZvXN-QicL4&e= is to have proper documentation, tests, … as documented here: https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_rust-2Dvmm_community_issues_14-23issue-2D408351841&d=DwIGaQ&c=08AGY6txKsvMOP6lYkHQpPMRA1U6kqhAwGa8-0QCg3M&r=PoK8upqsrqMY9Q21QxWB0ENVVKaX285kXk_XNb3b0rA&m=GEXtlh6N-Kldupyx-IfpIsG-kbyGhS1DKMLTO8yRLao&s=oVxopXQfyRLJPOf-o9i4jQfagC_VtxjMZoK9Kvrrv8o&e= QEMU’s interest in rust-vmm --------------------------- QEMU could benefit from low level parts such as vm-memory crate to make QEMU core parts more secure. The other aspect is about vhost-user backends, as they should be able to consume rust-vmm crates to be implemented in Rust, and be reused by any VMM. vm-sys-utils ------------ The first PR is ready and waiting for more review from Firecracker folks. From Crosvm perspective, the PR is alright, but internal projects (not only about VMM) are using sys-utils too. That’s the reason why it’s not straightforward to replace their sys-utils crate with the vm-sys-utils one, as they will have to write a sys-utils wrapper on top of vm-sys-utils, so that other internal projects can still consume the right set of utility functions. Vhost and vhost-user -------------------- Vhost pull request has been submitted by Gerry and needs to be reviewed. We didn’t spend time reviewing it during the PTG. The vhost-user protocol needs to implement the protocol both for the slave and the master. This way, the master can be consumed from the VMM side, and the slave can be consumed from any vhost-user daemon (interesting for any VMM that could directly reuse the vhost-user backend). 
In particular, there is some ongoing work from Red Hat on writing a virtiofsd vhost-user daemon in Rust. This should be part of the rust-vmm project, pulling in the vhost-user protocol. The remaining question is whether the vhost backends, including vhost-user (and hence the protocol itself), should live under a single crate. vm-allocator ------------ We had a discussion about using a vm-allocator crate as a helpful component to decide about memory ranges (MMIO or PIO) for any memory region related to a device. Based on the feedback from Paolo, Alex and Stefan, we need to design this allocator carefully if we want to be able to support PCI BAR programming from the firmware or the guest OS. This means we should be able to handle any sort of PCI reprogramming that updates the ranges chosen by the vm-allocator, since this is how the PCI spec is defined. Summary of the priorities ------------------------- We could maintain a list of priorities in an etherpad (or any sort of public/shared document), and at the end of each week, send the list of hot topics and priorities to make sure everyone in the community is on the same page. Code coverage ------------- Which tool should we use to track code coverage? Kcov is one option, but it does not seem to run on aarch64, which is a blocker since the project wants to support multiple CPU architectures. Kcov might be picked at first, but we need to investigate other solutions. Do we need to gate pull requests based on the coverage result? The discussion went both ways on this topic, but I think the solution we agreed upon was to gate based on a code coverage value. An important point about this value: it is not immutable, and based on manual review, the maintainers can lower it if that makes sense. For instance, if some new piece of code is being added, it does not mean that we have to implement tests just for the sake of keeping the coverage at the same level.
If the maintainers realize it makes no sense to test a new piece of code, then the decision will be made to reduce the threshold. Linters ------- As part of the regular CI running on every pull request, we want to run multiple linters to maintain good code quality. Fuzzing ------- We want some fuzzing on the rust-vmm crates. The question now is to identify which crates are the most unsafe ones. For instance, being able to fuzz the virtqueues (part of the virtio crate) would be very interesting to validate their proper behavior. Also, fuzzing the vhost-user backends, once they are part of the rust-vmm project, will be a very important task if we want to provide secure backends for any VMM that could reuse them. Security process ---------------- At some point, the project will run into some nasty bugs that are real security threats. To anticipate the day that happens, we should define a clear process for limiting the impact on rust-vmm users and for describing how such an issue is handled (quick fix, long term plan, etc.). vmm-vcpu -------- After a lot of discussion about the feasibility of having a trait for Vcpu, we came to the conclusion that, without further proof and justification that the trait provides any benefit, we should simply split Hyper-V and KVM into separate packages. The reason is that we don't think those two hypervisors have a lot in common, and it might be more effort to try to find similarities than to split them into distinct pieces. One interesting data point that we would like to look at in the context of this discussion is the work Alessandro has been doing to port Firecracker to Hyper-V. Being able to look at his code might help in understanding the fundamental differences between Hyper-V and KVM. vm-memory --------- Pull request #10 splits out the mmap functionality coming from Linux and adds support for the Windows mmap equivalent.
The code has been acknowledged by everybody as ready to be merged once the comments about squashing and reworking the commit message are addressed. vm-device --------- We discussed the vm-device issue that has been open for some time now. Some mentioned that it is important to keep the Bus trait generic so that any VMM can still reuse it, adding wrappers for devices if necessary. Based on the comments on the issue, it was pretty unclear where things are going with this crate, and that's why we agreed to wait for the pull request to be submitted before going further into hypothetical reviews and comments. Samuel will take care of submitting the pull request for this. Community README about rust-vmm goals ------------------------------------- We listed the main points we want to mention in the README of the community repository. Andreea took the AR to write the documentation describing the goals and motivation behind the project, based on the defined skeleton. We also mentioned that having a github.io webpage for the project would be a better way to promote it. We will need to create a dedicated repo for that, as part of the rust-vmm GitHub organization. We will need to put some effort into putting this webpage together at some point, the first step being to duplicate more or less the content of the README.
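Going back to the vm-device point above, the kind of generic bus under discussion could be sketched roughly as follows. This is only an illustration of the idea; all names and signatures here are hypothetical, not the actual vm-device API:

```rust
// Hypothetical sketch of a generic MMIO/PIO bus in the spirit of the
// vm-device discussion; names and signatures are illustrative, not the
// actual vm-device API.
use std::collections::BTreeMap;

/// Anything that can live on the bus implements this.
pub trait BusDevice {
    fn read(&mut self, offset: u64, data: &mut [u8]);
    fn write(&mut self, offset: u64, data: &[u8]);
}

/// Routes guest accesses to the device whose [base, base + len) range
/// contains the address.
#[derive(Default)]
pub struct Bus {
    // base address -> (range length, device)
    devices: BTreeMap<u64, (u64, Box<dyn BusDevice>)>,
}

impl Bus {
    pub fn insert(&mut self, base: u64, len: u64, dev: Box<dyn BusDevice>) {
        self.devices.insert(base, (len, dev));
    }

    /// Finds the device mapped at `addr` and the offset into it.
    fn resolve(&mut self, addr: u64) -> Option<(u64, &mut Box<dyn BusDevice>)> {
        // Last device whose base is <= addr, then range-check it.
        let (base, (len, dev)) = self.devices.range_mut(..=addr).next_back()?;
        if addr < *base + *len {
            Some((addr - *base, dev))
        } else {
            None
        }
    }

    pub fn read(&mut self, addr: u64, data: &mut [u8]) -> bool {
        match self.resolve(addr) {
            Some((offset, dev)) => {
                dev.read(offset, data);
                true
            }
            None => false,
        }
    }

    pub fn write(&mut self, addr: u64, data: &[u8]) -> bool {
        match self.resolve(addr) {
            Some((offset, dev)) => {
                dev.write(offset, data);
                true
            }
            None => false,
        }
    }
}
```

A VMM's exit handler would call `read`/`write` with the guest address taken from an MMIO or PIO exit; how generic the trait itself should be is exactly the open question on the issue.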
Thanks, Sebastien _______________________________________________ Rust-vmm mailing list Rust-vmm at lists.opendev.org http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4901 bytes Desc: not available URL: From yi.y.sun at linux.intel.com Wed May 8 03:36:49 2019 From: yi.y.sun at linux.intel.com (Yi Sun) Date: Wed, 8 May 2019 11:36:49 +0800 Subject: [Rust-VMM] [PTG] Meeting Notes In-Reply-To: <0db849c1afa04dd3930f092f4cd1b000@casmbox05.crowdstrike.sys> References: <6b801f2afe8d09850e37b2d450ffa96cad29ed00.camel@intel.com> <0db849c1afa04dd3930f092f4cd1b000@casmbox05.crowdstrike.sys> Message-ID: <20190508033649.GJ5182@yi.y.sun> I have also added a comment on the issue; pasting it here. **** Original comment below **** I am a little bit confused here. Why do you think a different VcpuExit structure would defeat the benefit of the abstraction? In my view, the VcpuExit abstraction should be the union of all hypervisors' exit reasons. Each concrete Vcpu implementation handles only the exit reasons it cares about. The differences are encapsulated in the concrete hypervisor crate, e.g. kvm-ioctls, and don't affect the upper layers. The only disadvantage is that there are redundant exit reasons for any specific hypervisor, but I don't think that is a big issue. Because the concrete hypervisor is selected statically in the upper layer, the Hypervisor/Vm/Vcpu abstractions should be zero cost, while keeping the upper-layer code generic and elegant. This brings a lot of benefit by making the upper-layer crates portable across platforms. So far we only have KVM and Hyper-V requirements, but there may be more and more hypervisors in the future.
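The superset-of-exit-reasons idea above can be sketched roughly as follows. All names here (Vcpu, VcpuExit, run_loop, the variant names) are illustrative, not the actual vmm-vcpu API:

```rust
// Rough sketch of a VcpuExit enum as the union of exit reasons across
// hypervisors, with a trait dispatched statically. Hypothetical names,
// not the real vmm-vcpu crate.

/// Union of exit reasons; each concrete hypervisor implementation only
/// ever produces the variants it supports.
#[derive(Debug)]
pub enum VcpuExit {
    IoIn { port: u16, size: usize },
    IoOut { port: u16, data: Vec<u8> },
    MmioRead { addr: u64, size: usize },
    MmioWrite { addr: u64, data: Vec<u8> },
    Hlt,
    /// Only a Hyper-V implementation would produce this; for KVM it is
    /// simply an unused variant, which costs nothing.
    HypervHypercall { input: u64 },
}

/// The hypervisor-agnostic vCPU interface.
pub trait Vcpu {
    type RunError;
    fn run(&mut self) -> Result<VcpuExit, Self::RunError>;
}

/// Generic over V: monomorphized per hypervisor (static dispatch), so
/// the abstraction adds no runtime cost to the upper layer.
pub fn run_loop<V: Vcpu>(vcpu: &mut V) -> Result<(), V::RunError> {
    loop {
        match vcpu.run()? {
            VcpuExit::Hlt => return Ok(()),
            _other => { /* dispatch to device emulation */ }
        }
    }
}
```

A KVM-backed implementation would map kvm_run exit reasons onto this enum, and a Hyper-V one would map WHP exit contexts; the upper layer matches on one type either way.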
On 19-05-08 03:26:32, Jenny Mankin wrote: > Thanks for the detailed summary of everything discussed at the PTG meetup! > > Regarding the vmm-vcpu crate, I'd provided more detail on the PR as a reply to Zach's comment, but it's not a very visible location that gets a lot of traffic, so I thought I'd solicit feedback here as well. > > In that thread, I've provided what I think is a technical justification for a vCPU abstraction crate, regardless of its ultimate utility in a full hypervisor-agnostic or Hyper-V implementation of Firecracker or Crosvm. Full explanation below (feel free to reply here, or on the comment thread itself at https://github.com/rust-vmm/vmm-vcpu/pull/3#issuecomment-489174754). > > I'm curious about the community's thoughts on whether this is sufficient justification for the crate, or whether demonstrable integration into Crosvm or Firecracker is actually a prerequisite for a rust-vmm abstraction crate such as this one (e.g., as requested, proving that Firecracker/Crosvm can support different hypervisors). > > > **** Original comment below (thread: https://github.com/rust-vmm/vmm-vcpu/pull/3#issuecomment-489174754) **** > > You are certainly right that the differences in the VcpuExit structure (due to the underlying vCPU exits exposed by each hypervisor) mean that any code making use of the run() function would need to specialize its processing of the exits based on the hypervisor. This would need either to be accomplished directly at the layer performing the vCPU run(), or might itself be abstracted within a higher-level crate. For example, a hypervisor-agnostic VM crate might utilize the trait generic (with VMM-specific references providing the implementation of those operations). See, for example, the proposed issue to provide additional abstractions of a VM and a VMM that make use of the abstracted vCPU functionality.
> > Getting crosvm/Firecracker to achieve parity with Hyper-V in addition to KVM is an ambitious goal, and it's true that doing so will require more layers than just swapping in a vCPU implementation of a generic trait. The specifics of what this would look like are something we'd like to look at, and focusing on/POCing the handling of the VcpuExit is a good suggestion. > > Stepping back from these more ambitious goals, I think the vCPU crate still offers an opportunity for abstraction of common VMM-related operations in higher-level crates that utilize common vCPU functionality. The arch crate comes to mind. In development of the Hyper-V-based libwhp crate, some of the arch functionality had to be duplicated, stripped of KVM-specific objects and APIs, and imported as a separate libwhp-specific crate. That duplication was one of the motivations behind my proposal of the arch crate here for rust-vmm: it naturally lends itself to a hypervisor-agnostic solution that can be easily imported into different VMM projects. And as we discussed a couple of weeks ago on the rust-vmm call, since those APIs accept the Vcpu trait generic as an input parameter, there is "zero cost" to the abstraction due to the static dispatch. > > That is one example where the generic abstraction provided by the vCPU crate benefits other hypervisor-agnostic crates; I think it's reasonable to assume others exist. For example, we are also currently researching and developing a Windows loader crate; this makes use of these same vCPU APIs and abstraction implementations to set up the VM. > > So independent of our goals of achieving interchangeable VMMs in ambitious projects like crosvm and Firecracker, I think that having generic crates abstracting lower-level functionality provides benefits to smaller-scale projects, like those that might be using rust-vmm crates as building blocks for their own VMMs.
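The "zero cost" static-dispatch argument can be illustrated with a small sketch: an arch-style helper written once against a trait bound is monomorphized for each hypervisor's vCPU type. The trait and helper below are hypothetical, not the actual arch or vmm-vcpu APIs:

```rust
// Hypothetical minimal trait exposing just what an arch-style boot
// helper needs from a vCPU (not the real vmm-vcpu API).
pub trait VcpuRegs {
    fn set_reg(&mut self, reg: &'static str, value: u64);
}

/// Hypervisor-agnostic boot setup: the `V: VcpuRegs` bound is resolved
/// at compile time, so a KVM- or WHP-backed vCPU can be passed in with
/// no runtime dispatch overhead.
pub fn setup_boot_regs<V: VcpuRegs>(vcpu: &mut V, entry: u64, stack: u64) {
    vcpu.set_reg("rip", entry);  // guest entry point
    vcpu.set_reg("rsp", stack);  // boot stack
    vcpu.set_reg("rflags", 0x2); // x86 RFLAGS reserved bit 1 must be set
}
```

The same function body serves any backend that implements the trait, which is the property that let the duplicated libwhp arch code be avoided in principle.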
> > -----Original Message----- > From: Boeuf, Sebastien > Sent: Tuesday, May 7, 2019 6:44 AM > To: rust-vmm at lists.opendev.org > Subject: [External] [Rust-VMM] [PTG] Meeting Notes > > [...] > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From slp at redhat.com Wed May 8 12:18:18 2019 From: slp at redhat.com (Sergio Lopez) Date: Wed, 08 May 2019 14:18:18 +0200 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm Message-ID: <87d0ktp2b9.fsf@redhat.com> Hi, I think it'd be useful having a crate providing virtio bindgen-generated bindings, similar to Firecracker's virtio_gen. I wrote one that provides the same functionality, but with multiple versions mapped as features, as kvm-bindings does: https://git.sinrega.org/slp/virtio-bindings Do you think we could make this a project under rust-vmm's umbrella? Thanks, Sergio. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From pbonzini at redhat.com Wed May 8 12:54:14 2019 From: pbonzini at redhat.com (Paolo Bonzini) Date: Wed, 8 May 2019 14:54:14 +0200 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm In-Reply-To: <87d0ktp2b9.fsf@redhat.com> References: <87d0ktp2b9.fsf@redhat.com> Message-ID: <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> On 08/05/19 07:18, Sergio Lopez wrote: > Hi, > > I think it'd be useful having a crate providing virtio bindgen-generated > bindings, similar to Firecracker's virtio_gen. I wrote one that provides > the same functionality, but with multiple versions mapped as features, > as kvm-bindings does: > > https://git.sinrega.org/slp/virtio-bindings > > Do you think we could make this a project under rust-vmm's umbrella? > > Thanks, Yes, I think so! However, what is the reason to have anything but the last version? Headers from a newer kernel should be backwards-compatible with code written for an older kernel. Thanks, Paolo From slp at redhat.com Wed May 8 14:17:07 2019 From: slp at redhat.com (Sergio Lopez) Date: Wed, 08 May 2019 16:17:07 +0200 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm In-Reply-To: <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> References: <87d0ktp2b9.fsf@redhat.com> <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> Message-ID: <87bm0dowt8.fsf@redhat.com> Paolo Bonzini writes: > On 08/05/19 07:18, Sergio Lopez wrote: >> Hi, >> >> I think it'd be useful having a crate providing virtio bindgen-generated >> bindings, similar to Firecracker's virtio_gen. I wrote one that provides >> the same functionality, but with multiple versions mapped as features, >> as kvm-bindings does: >> >> https://git.sinrega.org/slp/virtio-bindings >> >> Do you think we could make this a project under rust-vmm's umbrella? >> >> Thanks, > > Yes, I think so! However, what is the reason to have anything but the > last version? 
> Headers from a newer kernel should be backwards-compatible with code written for an older kernel. The main reason for that is to allow crate users to do strict size checks on structs. As an example, virtio_blk_config was extended from 4.14 to 5.0 with new fields. This structure may come as the payload of a VhostUserConfig message, and while you are able to just use the latest version and accept any payload of the same size or smaller, I think some users may want to be more strict and only allow the expected size. Given that the cost of maintaining the bindings is very small, I think that's a use case we can afford to support (and I volunteer to do so :-). Thanks, Sergio. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From liuj97 at gmail.com Wed May 8 14:38:43 2019 From: liuj97 at gmail.com (Liu Jiang) Date: Wed, 8 May 2019 22:38:43 +0800 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm In-Reply-To: <87bm0dowt8.fsf@redhat.com> References: <87d0ktp2b9.fsf@redhat.com> <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> <87bm0dowt8.fsf@redhat.com> Message-ID: <7A9EAC11-A5BF-4AE6-BDE2-C07321AF845D@gmail.com> > On May 8, 2019, at 10:17 PM, Sergio Lopez wrote: > > Paolo Bonzini writes: > >> On 08/05/19 07:18, Sergio Lopez wrote: >>> Hi, >>> >>> I think it'd be useful having a crate providing virtio bindgen-generated >>> bindings, similar to Firecracker's virtio_gen. I wrote one that provides >>> the same functionality, but with multiple versions mapped as features, >>> as kvm-bindings does: >>> >>> https://git.sinrega.org/slp/virtio-bindings >>> >>> Do you think we could make this a project under rust-vmm's umbrella? >>> >>> Thanks, >> >> Yes, I think so! However, what is the reason to have anything but the >> last version?
>> Headers from a newer kernel should be backwards-compatible with code written for an older kernel. > > The main reason for that is to allow crate users to do > strict size checks on structs. > > As an example, virtio_blk_config was extended from 4.14 to 5.0 with new > fields. This structure may come as the payload of a VhostUserConfig > message, and while you are able to just use the latest version and > accept any payload of the same size or smaller, I think some users may want > to be more strict and only allow the expected size. > > Given that the cost of maintaining the bindings is very small, I think > that's a use case we can afford to support (and I volunteer to do so > :-). A hypervisor may support multiple kernel versions. So how about defining multiple data structures for different kernel versions? BTW, the auto-generated code has some useless code with poor readability. It would be appreciated to manually maintain a beautified version :) > > Thanks, > Sergio. > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sebastien.boeuf at intel.com Thu May 9 00:02:43 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Thu, 9 May 2019 00:02:43 +0000 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm In-Reply-To: <7A9EAC11-A5BF-4AE6-BDE2-C07321AF845D@gmail.com> References: <87d0ktp2b9.fsf@redhat.com> <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> <87bm0dowt8.fsf@redhat.com> <7A9EAC11-A5BF-4AE6-BDE2-C07321AF845D@gmail.com> Message-ID: <232eba949cb159e53bb5f840261689816427e94c.camel@intel.com> On Wed, 2019-05-08 at 22:38 +0800, Liu Jiang wrote: [...]
BTW, the auto-generated code contains some unused items and has poor readability. It would be appreciated to manually maintain a beautified version :)

The only concern with a "beautified" version is that it needs more human maintenance. I'm not against it, but we need to find a real benefit to this.

Something we didn't talk about is the fact that we had some discussions a few weeks ago about putting those bindings into the virtio crate itself. I'm glad to see that everybody agrees (or doesn't disagree) with putting them inside their own crate, the same way it's already done for kvm-bindings. Crates are cheap, so I feel it's better if we can decouple things.

And one global comment is that we should follow the same pattern for any auto-generated binding we might add in the future, for the sake of being consistent.

Thanks,
Sebastien

Thanks,
Sergio.

_______________________________________________
Rust-vmm mailing list
Rust-vmm at lists.opendev.org
http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm

From fandree at amazon.com Thu May 9 08:14:26 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Thu, 9 May 2019 08:14:26 +0000
Subject: [Rust-VMM] Buildkite CI - multiple pipelines?
Message-ID: <1557389666010.85534@amazon.com>

Hey everyone,

During PTG we discussed having a centralized place for the buildkite pipeline. The buildkite pipeline is the definition of steps (or commands) that run as part of the CI. Since all crates will have to pass common checks like `cargo test`, `cargo build` and `cargo fmt`, the idea would be to have these steps in a centralized repository so we do not have to duplicate the pipeline in all repositories. I created an issue for this purpose [1].
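For illustration, a minimal shared pipeline.yml covering those common steps could look something like the sketch below (labels and flags are placeholders, not the actual rust-vmm configuration):

```yaml
# Hypothetical .buildkite/pipeline.yml in the shared CI repository.
steps:
  - label: "cargo build"
    command: cargo build --release
  - label: "cargo test"
    command: cargo test
  - label: "cargo fmt"
    command: cargo fmt --all -- --check
```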
The pipeline would live in a `buildkite-ci` repository or similar. Maybe rust-vmm-ci is a better option so we can have the coverage test in the same repository as well, but that is a different discussion. Each GitHub repository would then use `buildkite-ci` as a git submodule.

To better explain this, let's take an example. Let's say we have `buildkite-ci.git` and in it we have a `.buildkite` directory. In the `.buildkite` directory we have `pipeline.yml`, which defines steps/commands like cargo build and cargo test. Now, say we want to enable the buildkite CI for the vm-memory repository. To do that, we would need to do the following:

1. Add `buildkite-ci` as a git submodule of vm-memory.
2. Add a new vm-memory-ci pipeline using the Buildkite web interface.
3. Specify which pipeline to use to run the tests. This is done via a command called "pipeline upload", which takes as a parameter a path relative to the root of the repository. You can read more about it here [2]. The command in this case will look something like:

buildkite-agent pipeline upload buildkite-ci/.buildkite/pipeline.yml

With this design you get 2 things:
1. You don't have to duplicate the pipeline for all repositories.
2. You can still have custom integration tests for each repository.

Now, after some thought I started to believe that the better choice is actually to provide multiple pipelines instead of one. My proposal would be to have one pipeline for each platform and operating system we want to support. And let me tell you why:

1. Some crates (like vm-memory) will also have support for Windows, so you want the integration tests to pass on Windows as well. But you can't have that in the default pipeline because some crates cannot support it (like kvm-ioctls).
2. Ideally we would have *most* of the crates work on arm and x86.
Examples of things that aren't available (yet) on arm are clippy and kcov.

Now, when we start development we might not want to add support for both arm and x86 in the first PR because it might turn out to be very complex and hard to review. So we can start with x86 and only use the x86 pipeline till support for arm is added.

With our current configuration of crates, platforms and operating systems we would end up having 3 pipelines:
- x86_linux_pipeline.yml (used by kvm-ioctls, vm-memory, vmm-sys-utils)
- x86_windows_pipeline.yml (used by vm-memory)
- arm_linux_pipeline.yml (used by kvm-ioctls, vmm-sys-utils)

What do you all think?

Regards,
Andreea

[1] https://github.com/rust-vmm/community/issues/56
[2] https://buildkite.com/docs/pipelines/defining-steps

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.

From slp at redhat.com Thu May 9 18:45:01 2019
From: slp at redhat.com (Sergio Lopez)
Date: Thu, 09 May 2019 20:45:01 +0200
Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm
In-Reply-To: <232eba949cb159e53bb5f840261689816427e94c.camel@intel.com>
References: <87d0ktp2b9.fsf@redhat.com> <5c235521-2be5-ae72-6672-ba543ad78604@redhat.com> <87bm0dowt8.fsf@redhat.com> <7A9EAC11-A5BF-4AE6-BDE2-C07321AF845D@gmail.com> <232eba949cb159e53bb5f840261689816427e94c.camel@intel.com>
Message-ID: <877eaztqky.fsf@redhat.com>

Boeuf, Sebastien writes:

> [...]
>
> BTW, the auto-generated code has some useless code with poor readability.
> It would be appreciated to manually maintain a beautified version :)
>
> The only concern with a "beautified" version is that it needs more human maintenance. I'm not against it, but we need to find a real benefit to this.

I share the same opinion. The purpose of these bindings is getting access to well-known and documented interfaces. The original sources can be used as a reference.

Thanks,
Sergio.
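The versions-as-features idea from this thread can be sketched roughly as follows. The module layout and the added fields are invented for illustration; in a real bindings crate each version module would sit behind its own cargo feature, as kvm-bindings does:

```rust
#![allow(non_camel_case_types)]

// Hypothetical sketch: two kernel versions of the same binding kept
// side by side. In a real bindings crate each module would be gated
// by a cargo feature rather than always compiled in.
pub mod v4_14_0 {
    #[repr(C)]
    pub struct virtio_blk_config {
        pub capacity: u64,
        pub size_max: u32,
        pub seg_max: u32,
    }
}

pub mod v5_0_0 {
    #[repr(C)]
    pub struct virtio_blk_config {
        pub capacity: u64,
        pub size_max: u32,
        pub seg_max: u32,
        // Illustrative stand-ins for fields added between 4.14 and 5.0.
        pub max_discard_sectors: u32,
        pub max_discard_seg: u32,
    }
}

fn main() {
    // A strict consumer can pin the exact layout it expects instead of
    // accepting anything up to the latest size.
    let old = std::mem::size_of::<v4_14_0::virtio_blk_config>();
    let new = std::mem::size_of::<v5_0_0::virtio_blk_config>();
    assert!(old < new);
    println!("4.14 layout: {} bytes, 5.0 layout: {} bytes", old, new);
}
```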
From sebastien.boeuf at intel.com Thu May 9 21:52:47 2019
From: sebastien.boeuf at intel.com (Boeuf, Sebastien)
Date: Thu, 9 May 2019 21:52:47 +0000
Subject: [Rust-VMM] Buildkite CI - multiple pipelines?
In-Reply-To: <1557389666010.85534@amazon.com>
References: <1557389666010.85534@amazon.com>
Message-ID:

Hi Andreea,

On Thu, 2019-05-09 at 08:14 +0000, Florescu, Andreea wrote:

> Hey everyone,
>
> During PTG we discussed having a centralized place for the buildkite pipeline. [...] The pipeline would live in a `buildkite-ci` repository or similar. Maybe rust-vmm-ci is a better option so we can have the coverage test in the same repository as well, but that is a different discussion.

Having a centralized repo for all the testing also simplifies where to host the common integration tests that we might run to validate several crates together (the dummy VMM we mentioned during PTG). The centralized repo does not have to contain a single pipeline, as you mentioned later in this email. About the naming, I would definitely prefer something generic like "rust-vmm-ci" or "ci" or "tests".

> Each GitHub repository would then use `buildkite-ci` as a git submodule. [...] With this design you get 2 things:
> 1. You don't have to duplicate the pipeline for all repositories
> 2. You can still have custom integration tests for each repository

This looks good, but I also wanted to mention that using a git submodule is not the only way to achieve this. You could have a dedicated "ci" directory (for every crate) in which you could have the same script that pulls the content of the "ci" repo before calling into one pipeline or another (that's the part specific to each crate). The only concern I have with git submodules is that we will have to pin them to a specific version on each and every crate, which means that when we add more tests or modify them in the "ci" repo, we will have to update the submodule version on every crate. On the other hand, without submodules, if we introduce some breaking changes in the tests (which should not happen), we will have to update each and every crate, otherwise CI will be broken.

> Now, after some thought I started to believe that the better choice is to actually provide multiple pipelines instead of one. [...] With our current configuration of crates, platforms and operating systems we would end up having 3 pipelines:
> - x86_linux_pipeline.yml (used by kvm-ioctls, vm-memory, vmm-sys-utils)
> - x86_windows_pipeline.yml (used by vm-memory)
> - arm_linux_pipeline.yml (used by kvm-ioctls, vmm-sys-utils)

Don't we want vm-memory to also run the arm_linux_pipeline.yml?

> What do you all think?

Sounds good, and we should make sure to parallelize those pipelines when crates need to test more than one pipeline. Waiting for the CI on a PR is the worst thing, let's make sure to keep it efficient :)

Thanks,
Sebastien

From lpetrut at cloudbasesolutions.com Fri May 10 06:53:14 2019
From: lpetrut at cloudbasesolutions.com (Lucian Petrut)
Date: Fri, 10 May 2019 06:53:14 +0000
Subject: [Rust-VMM] Buildkite CI - multiple pipelines?
Message-ID: <64050966FCE0B948BCE2B28DB6E0B7D557A1AFA5@CBSEX1.cloudbase.local>

Hi,

About parallelizing the jobs: each buildkite agent runs one step/job at a time, so I think we should run multiple agents per host (e.g. by passing `--spawn`). One idea would be to just run one agent per host CPU core.

Also, if we go with multiple pipelines per project, I guess we'll have GitHub webhooks for each of them, which will then be triggered individually.

Regards,
Lucian Petrut

From: Boeuf, Sebastien
Sent: Friday, May 10, 2019 12:53 AM
To: fandree at amazon.com; rust-vmm at lists.opendev.org
Subject: Re: [Rust-VMM] Buildkite CI - multiple pipelines?

[...]
From fandree at amazon.com Fri May 10 08:59:06 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Fri, 10 May 2019 08:59:06 +0000
Subject: [Rust-VMM] Buildkite CI - multiple pipelines?
In-Reply-To: <64050966FCE0B948BCE2B28DB6E0B7D557A1AFA5@CBSEX1.cloudbase.local>
References: <64050966FCE0B948BCE2B28DB6E0B7D557A1AFA5@CBSEX1.cloudbase.local>
Message-ID: <1557478745387.95485@amazon.com>

Hey everyone,

I will add all your points to the GitHub issue so we can continue the discussions there. I find it easier as we can link issues to PRs and we can have the full history.

https://github.com/rust-vmm/community/issues/56

Regards,
Andreea

________________________________
From: Lucian Petrut
Sent: Friday, May 10, 2019 9:53 AM
To: Boeuf, Sebastien; Florescu, Andreea; rust-vmm at lists.opendev.org
Subject: RE: [Rust-VMM] Buildkite CI - multiple pipelines?

[...]

From samuel.ortiz at intel.com Tue May 14 16:06:16 2019
From: samuel.ortiz at intel.com (Samuel Ortiz)
Date: Tue, 14 May 2019 18:06:16 +0200
Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm
In-Reply-To: <87d0ktp2b9.fsf@redhat.com>
References: <87d0ktp2b9.fsf@redhat.com>
Message-ID: <20190514160616.GD4338@caravaggio>

Hi Sergio,

On Wed, May 08, 2019 at 02:18:18PM +0200, Sergio Lopez wrote:
> Hi,
>
> I think it'd be useful having a crate providing virtio bindgen-generated
> bindings, similar to Firecracker's virtio_gen.
> I wrote one that provides the same functionality, but with multiple versions mapped as features, as kvm-bindings does:
>
> https://git.sinrega.org/slp/virtio-bindings
>
> Do you think we could make this a project under rust-vmm's umbrella?

It would make a lot of sense, yes. I created a virtio-bindings repo:
https://github.com/rust-vmm/virtio-bindings

Please send a PR with your changes there.

Cheers,
Samuel.

---------------------------------------------------------------------
Intel Corporation SAS (French simplified joint stock company)
Registered headquarters: "Les Montalets"- 2, rue de Paris, 92196 Meudon Cedex, France
Registration Number: 302 456 199 R.C.S. NANTERRE
Capital: 4,572,000 Euros
This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.

From samuel.ortiz at intel.com Tue May 14 16:13:09 2019
From: samuel.ortiz at intel.com (Ortiz, Samuel)
Date: Tue, 14 May 2019 16:13:09 +0000
Subject: [Rust-VMM] Invitation: Rust-VMM Bi-Weekly Community Mtg @ Every 2 weeks from 10am to 11am on Wednesday from Wed Feb 6 to Wed Dec 18 (CST) (rust-vmm@lists.opendev.org)
In-Reply-To: References: Message-ID:

All,

At the Denver rust-vmm PTG, we discussed cancelling this meeting when there is no agenda. The next one is in 24 hours, so I'm doing a call for agenda: please speak up if you want to have something discussed at tomorrow's meeting.

As a reminder, the meeting agenda is here and available for anyone to amend:
https://etherpad.openstack.org/p/rust_vmm_2019_biweekly_calls

Cheers,
Samuel.
________________________________
From: claire at openstack.org [claire at openstack.org]
Sent: Tuesday, January 29, 2019 4:48 PM
Required: claire at openstack.org; rust-vmm at lists.opendev.org
Subject: [Rust-VMM] Invitation: Rust-VMM Bi-Weekly Community Mtg @ Every 2 weeks from 10am to 11am on Wednesday from Wed Feb 6 to Wed Dec 18 (CST) (rust-vmm at lists.opendev.org)

When: Wednesday, May 15, 2019 5:00 PM-6:00 PM (every 2 weeks from 10am to 11am on Wednesday from Wed Feb 6 to Wed Dec 18, Central Time - Chicago)
Notes / Agenda: https://etherpad.openstack.org/p/rust_vmm_2019_biweekly_calls
Join Zoom Meeting: https://zoom.us/j/181523033 (Meeting ID: 181 523 033)

From robert.bradford at intel.com Tue May 14 23:17:51 2019
From: robert.bradford at intel.com (Rob Bradford)
Date: Wed, 15 May 2019 00:17:51 +0100
Subject: [Rust-VMM] Announcing Cloud Hypervisor project
Message-ID: <1557875871.3036.22.camel@intel.com>

Hi all,

Today we released our effort on building a hypervisor based on Rust-VMM components.

https://github.com/intel/cloud-hypervisor

Cloud Hypervisor makes extensive use of Rust-VMM crates, both released and under development. Thank you to everyone in this community for all your hard work, and particularly to the crosvm and Firecracker developers, whose code provides the basis for most of these crates and is essential to this project.

The README has a great explanation of the goals of the project, but one that I want to highlight here is the goal to provide a vehicle for the integration of Rust-VMM crates under development, and so we are looking forward to many new Rust-VMM crates appearing.

Cheers,

Rob

From xu at hyper.sh Wed May 15 02:22:30 2019
From: xu at hyper.sh (Xu Wang)
Date: Wed, 15 May 2019 10:22:30 +0800
Subject: [Rust-VMM] Announcing Cloud Hypervisor project
In-Reply-To: <1557875871.3036.22.camel@intel.com>
References: <1557875871.3036.22.camel@intel.com>
Message-ID:

Cool! Thanks, Rob. I think this will help in the integration of rust-vmm. Looking forward to integrating it with Kata Containers.

Cheers,
Xu

On Wed, May 15, 2019 at 7:18 AM Rob Bradford wrote:
> Hi all,
>
> Today we released our effort on building a hypervisor based on Rust-VMM
> components.
>
> https://github.com/intel/cloud-hypervisor
>
> Cloud Hypervisor makes extensive use of Rust-VMM crates, both released
> and under development.
Thank you to everyone in this community for all > your hard work and particularly the crosvm and Firecracker developers > whose code provides the basis for most of these crates and is essential > in this project. > > The README has a great explanation of the goals of the project but one > that I want to highlight here is the goal to provide a vehicle for the > integration of Rust-VMM crates under development and so we are looking > forward to many new Rust-VMM crates appearing. > > Cheers, > > Rob > > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm > -- -- Xu Wang CTO & Cofounder, Hyper github/twitter/wechat: @gnawux http://hyper.sh Hyper_: Make VM run like container -------------- next part -------------- An HTML attachment was scrubbed... URL: From slp at redhat.com Wed May 15 11:13:56 2019 From: slp at redhat.com (Sergio Lopez) Date: Wed, 15 May 2019 13:13:56 +0200 Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm In-Reply-To: <20190514160616.GD4338@caravaggio> References: <87d0ktp2b9.fsf@redhat.com> <20190514160616.GD4338@caravaggio> Message-ID: <878sv8gec8.fsf@redhat.com> Samuel Ortiz writes: > Hi Sergio, > > On Wed, May 08, 2019 at 02:18:18PM +0200, Sergio Lopez wrote: >> Hi, >> >> I think it'd be useful having a crate providing virtio bindgen-generated >> bindings, similar to Firecracker's virtio_gen. I wrote one that provides >> the same functionality, but with multiple versions mapped as features, >> as kvm-bindings does: >> >> https://git.sinrega.org/slp/virtio-bindings >> >> Do you think we could make this a project under rust-vmm's umbrella? > It would make a lot of sense, yes. > I created a virtio-bindings repo: > https://github.com/rust-vmm/virtio-bindings > > Please send a PR with your changes there. Done, thanks! Sergio. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL:

From fandree at amazon.com Sun May 19 10:54:05 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Sun, 19 May 2019 10:54:05 +0000
Subject: [Rust-VMM] Including a virtio-bindings crate in rust-vmm
In-Reply-To: <878sv8gec8.fsf@redhat.com>
References: <87d0ktp2b9.fsf@redhat.com> <20190514160616.GD4338@caravaggio>,<878sv8gec8.fsf@redhat.com>
Message-ID: <1558263241146.12722@amazon.com>

Hey everyone,

Since we are going to have the bindings as a separate repository, and also since we agreed (I think) on supporting bindings from multiple kernel versions, we should probably re-export the kernel versions from which the bindings are generated, so that people using kvm-ioctls or vm-virtio can also specify the kernel version of the bindings.

In kvm-bindings you can select the Linux kernel version using Rust features. By default the latest available bindings are used:

kvm-bindings = "0.1" -> use the bindings generated from Linux kernel 4.20. This will always point to the latest kernel version for which we have bindings.
kvm-bindings = { version = "0.1", features = ["kvm_v4_14_0"] } -> use the bindings generated from Linux kernel version 4.14.0.

But in kvm-ioctls the versions are ignored and we are always using the latest generated bindings. I honestly didn't completely understand the use case for using bindings from older kernel versions, but since this seems to be the case, should we allow choosing the kernel version from higher-level crates like kvm-ioctls and vm-virtio? How are people currently working around this problem?

One thing to keep in mind is that this approach can become hard to maintain, because all crates using kvm-ioctls and vm-virtio will also have to re-export the versions if said crates are not so high-level that no other crate is using them.
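As a reference, the re-export pattern described above could look roughly like this in the Cargo.toml of a higher-level crate such as kvm-ioctls. The `kvm_v4_14_0` feature name is taken from the message above; the `kvm_v4_20_0` name and the exact feature list exposed by kvm-bindings are assumptions for illustration only:

```toml
# Hypothetical Cargo.toml fragment for a higher-level crate (e.g. kvm-ioctls)
# that forwards the kernel-version choice down to kvm-bindings.
[dependencies]
kvm-bindings = { version = "0.1", default-features = false }

[features]
# Mirror kvm-bindings' default of "latest available bindings".
default = ["kvm_v4_20_0"]
# Each feature simply enables the matching kvm-bindings feature.
kvm_v4_14_0 = ["kvm-bindings/kvm_v4_14_0"]
kvm_v4_20_0 = ["kvm-bindings/kvm_v4_20_0"]
```

A consumer could then pick the kernel version with `features = ["kvm_v4_14_0"]` on the higher-level crate itself, at the maintenance cost noted above: every crate in the dependency chain has to repeat this forwarding.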
Regards,
Andreea

________________________________________
From: Sergio Lopez
Sent: Wednesday, May 15, 2019 2:13 PM
To: Samuel Ortiz
Cc: rust-vmm at lists.opendev.org
Subject: Re: [Rust-VMM] Including a virtio-bindings crate in rust-vmm

Samuel Ortiz writes:
> Hi Sergio,
>
> On Wed, May 08, 2019 at 02:18:18PM +0200, Sergio Lopez wrote:
>> Hi,
>>
>> I think it'd be useful having a crate providing virtio bindgen-generated
>> bindings, similar to Firecracker's virtio_gen. I wrote one that provides
>> the same functionality, but with multiple versions mapped as features,
>> as kvm-bindings does:
>>
>> https://git.sinrega.org/slp/virtio-bindings
>>
>> Do you think we could make this a project under rust-vmm's umbrella?
> It would make a lot of sense, yes.
> I created a virtio-bindings repo:
> https://github.com/rust-vmm/virtio-bindings
>
> Please send a PR with your changes there.

Done, thanks!
Sergio.

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.

From fandree at amazon.com Mon May 20 12:14:02 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Mon, 20 May 2019 12:14:02 +0000
Subject: [Rust-VMM] rust-vmm review status
Message-ID: <1558354442557.62488@amazon.com>

Hey everyone,

Guess who's behind on PR reviews? We aaaare!!! There are some issues and PRs that need our attention. Well, actually, now that I got to go through all of them: there are **a lot** of issues and PRs. Please assign yourselves as reviewers so we can move these forward. If you have a PR and you don't receive any reviews in a timely fashion, you can just bug individuals to review your code. That worked for me in the past.

PRs needing review:
1. Adding initial code for vm-virtio: https://github.com/rust-vmm/vm-virtio/pull/1
2. volatile_memory: add VolatileArrayRef in vm-memory: https://github.com/rust-vmm/vm-memory/pull/19
3. Adding virtio-bindings: https://github.com/rust-vmm/virtio-bindings/pull/1
4. Initial code for the linux-loader: https://github.com/rust-vmm/linux-loader/pull/2
5. Fix warnings on kvm-ioctls: https://github.com/rust-vmm/kvm-ioctls/pull/37
6. More aarch64 specific testing and code on kvm-ioctls: https://github.com/rust-vmm/kvm-ioctls/pull/33
7. Container that is running our current CI with buildkite: https://github.com/rust-vmm/rust-vmm-container/pull/1
8. Initial vhost code: https://github.com/rust-vmm/vhost/pull/2

PRs needing attention from submitter:
1. vm-memory: https://github.com/rust-vmm/vm-memory/pull/9

Issues for creating new crates:
1. ACPI: https://github.com/rust-vmm/community/issues/23
2. Hypervisor-firmware: https://github.com/rust-vmm/community/issues/29
3. cpu-model: https://github.com/rust-vmm/community/issues/31
4. non-volatile memory (nvdimm): https://github.com/rust-vmm/community/issues/38
5. arch: https://github.com/rust-vmm/community/issues/41
6. Hypervisor crate: https://github.com/rust-vmm/community/issues/50
7. Replace kvm-ioctls with kvm: https://github.com/rust-vmm/community/issues/55
8. rust-vmm-ci: https://github.com/rust-vmm/community/issues/56
9. vfio: https://github.com/rust-vmm/community/issues/57

Also, I missed the last sync call, can someone brief me regarding the vmm-vcpu crate?

Regards,
Andreea

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fandree at amazon.com Mon May 20 14:50:26 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Mon, 20 May 2019 14:50:26 +0000
Subject: [Rust-VMM] Rust OSDev
Message-ID: <1558363825869.15077@amazon.com>

FYI I just discovered the Rust OSDev organization on GitHub and from a first look it seems there is some overlap with what we are trying to do.
I am specifically referring to the following repositories:
- ACPI: https://github.com/rust-osdev/acpi
- x86_64 looks like it can have some shared functionality with what we want to do in our arch crate: https://github.com/rust-osdev/x86_64

There might be parts of other repositories that could be interesting for us as well.

I will take some time to look at what they are doing and see if we can share something so we don't end up re-inventing the wheel. I will share what I discover with you.

Regards,
Andreea

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sebastien.boeuf at intel.com Mon May 20 14:55:53 2019
From: sebastien.boeuf at intel.com (Boeuf, Sebastien)
Date: Mon, 20 May 2019 14:55:53 +0000
Subject: [Rust-VMM] Rust OSDev
In-Reply-To: <1558363825869.15077@amazon.com>
References: <1558363825869.15077@amazon.com>
Message-ID: <72d376d719965766d0b06e7b6b897460ad9a7715.camel@intel.com>

Oh thanks for sharing that. I agree it's always better if we can avoid wasting time writing already existing code!

Sebastien

On Mon, 2019-05-20 at 14:50 +0000, Florescu, Andreea wrote: FYI I just discovered the Rust OSDev organization on GitHub and from a first look it seems there is some overlap with what we are trying to do. I am specifically referring to the following repositories: - ACPI: https://github.com/rust-osdev/acpi - x86_64 looks like it can have some shared functionality with what we want to do in our arch crate: https://github.com/rust-osdev/x86_64 There might be parts of other repositories that could be interesting for us as well. I will take some time to look at what they are doing and see if we can share something so we don't end up re-inventing the wheel. I will share what I discover with you.
Regards,
Andreea

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
_______________________________________________
Rust-vmm mailing list
Rust-vmm at lists.opendev.org
http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dgilbert at redhat.com Mon May 20 15:01:55 2019
From: dgilbert at redhat.com (Dr. David Alan Gilbert)
Date: Mon, 20 May 2019 16:01:55 +0100
Subject: [Rust-VMM] Rust OSDev
In-Reply-To: <1558363825869.15077@amazon.com>
References: <1558363825869.15077@amazon.com>
Message-ID: <20190520150154.GG2726@work-vm>

* Florescu, Andreea (fandree at amazon.com) wrote:
> FYI I just discovered the Rust OSDev organization on GitHub and from a first look it seems there is some overlap with what we are trying to do.
>
> I am specifically referring to the following repositories:
>
> - ACPI: https://github.com/rust-osdev/acpi
>
> - x86_64 looks like it can have some shared functionality with what we want to do in our arch crate: https://github.com/rust-osdev/x86_64

Yes, I know a colleague had asked for some changes in there to help in his kvm code (as part of enarx), e.g. https://github.com/rust-osdev/x86_64/issues/72

Dave

> There might be parts of other repositories that could be interesting for us as well.
>
> I will take some time to look at what they are doing and see if we can share something so we don't end up re-invent the wheel. I will share what I discover with you.
>
> Regards,
>
> Andreea
>
> Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
> _______________________________________________
> Rust-vmm mailing list
> Rust-vmm at lists.opendev.org
> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm

--
Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK

From yang.zhong at intel.com Tue May 21 03:15:03 2019
From: yang.zhong at intel.com (Zhong, Yang)
Date: Tue, 21 May 2019 03:15:03 +0000
Subject: [Rust-VMM] Rust OSDev
In-Reply-To: <1558363825869.15077@amazon.com>
References: <1558363825869.15077@amazon.com>
Message-ID: <7A85DF989CAE8F42902CF7B31A7D94A1487A8408@shsmsx102.ccr.corp.intel.com>

Hello Andreea,

If I understand correctly, Rust-osdev mainly focuses on the kernel/OS side: they want to use Rust to implement a kernel. As for the ACPI crate mentioned in the link below, it only implements an ACPI parser for ACPI tables in a Rust kernel, whereas the ACPI work in rust-vmm is to implement the various ACPI tables for the guest kernel to parse.

Regards,
Yang

From: Florescu, Andreea [mailto:fandree at amazon.com]
Sent: Monday, May 20, 2019 10:50 PM
To: rust-vmm ML
Subject: [Rust-VMM] Rust OSDev

FYI I just discovered the Rust OSDev organization on GitHub and from a first look it seems there is some overlap with what we are trying to do. I am specifically referring to the following repositories: - ACPI: https://github.com/rust-osdev/acpi - x86_64 looks like it can have some shared functionality with what we want to do in our arch crate: https://github.com/rust-osdev/x86_64 There might be parts of other repositories that could be interesting for us as well. I will take some time to look at what they are doing and see if we can share something so we don't end up re-invent the wheel. I will share what I discover with you.

Regards,
Andreea

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pbonzini at redhat.com Wed May 22 10:34:45 2019
From: pbonzini at redhat.com (Paolo Bonzini)
Date: Wed, 22 May 2019 12:34:45 +0200
Subject: [Rust-VMM] Rust OSDev
In-Reply-To: <7A85DF989CAE8F42902CF7B31A7D94A1487A8408@shsmsx102.ccr.corp.intel.com>
References: <1558363825869.15077@amazon.com> <7A85DF989CAE8F42902CF7B31A7D94A1487A8408@shsmsx102.ccr.corp.intel.com>
Message-ID: <79963b05-f329-a3fb-84f1-87fb62bcfee0@redhat.com>

On 21/05/19 05:15, Zhong, Yang wrote:
> Hello Andreea,
>
> If I understand correctly, Rust-osdev mainly focus on kernel/OS side and
> they want to use rust to implement kernel.
>
> As for ACPI mentioned in below link, their ACPI only implemented ACPI
> parser for ACPI tables in rust kernel.
>
> The ACPI in rust-vmm is to implement different ACPI tables for guest
> kernel to parse.

It can still reuse the struct definitions, and also the rust-osdev code could be used in unit tests.

Paolo

From fandree at amazon.com Mon May 27 13:25:18 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Mon, 27 May 2019 13:25:18 +0000
Subject: [Rust-VMM] Next rust-vmm meet-up
Message-ID: <1558963516923.22009@amazon.com>

Hey everyone,

About a month ago a few of us met during PTG [1], where we discussed the design of various rust-vmm components, testing, as well as some community topics. You can check the full meeting notes in the email thread [2].

While we were there we also talked about setting up another meetup this year. One apparently popular option was to locate this event in Bucharest, during the autumn. Therefore, on behalf of the local AWS Dev Center, I am happy to extend an invitation to all of you for a 3-day hacking session. AWS will provide the workspace for all attendees (either in or around our offices).

My personal preference would be to have a meetup more focused on the coding. Ideally we would get together and write some code for new components, review the code, discuss crate design and similar activities.
I am open to other suggestions as well, but I think we could benefit from a more hands-on session. Can you please take 1 minute to complete the form [3] so we can get a rough idea of how many people are able to join, and also come up with a week when most of us can attend? We can settle on the actual dates once we know roughly which week is the best option for everyone.

[1] https://www.openstack.org/ptg/
[2] http://lists.opendev.org/pipermail/rust-vmm/2019-May/000200.html
[3] https://docs.google.com/forms/d/1rw89Bdigh7QxHXY3vvucemrA5oNOmFC59F4DBtqMLEA

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From samuel.ortiz at intel.com Tue May 28 09:49:37 2019
From: samuel.ortiz at intel.com (Samuel Ortiz)
Date: Tue, 28 May 2019 11:49:37 +0200
Subject: [Rust-VMM] Organization membership
Message-ID: <20190528094937.GG31838@caravaggio>

All,

The community README has a "Become a member of rust-vmm" section [1] describing how to open an issue for being added as a member of the rust-vmm organization. This seems to imply that organization members have special privileges or, even worse, that one needs to be an organization member to contribute to the project.

We want to remove that perception because a) members do not have any special permissions on the organization and b) any registered GitHub user can contribute to the project. So we want to remove that section and the associated issue template. Any objections?

Cheers,
Samuel.

[1] https://github.com/rust-vmm/community#become-a-member-of-rust-vmm
---------------------------------------------------------------------
Intel Corporation SAS (French simplified joint stock company)
Registered headquarters: "Les Montalets"- 2, rue de Paris, 92196 Meudon Cedex, France
Registration Number: 302 456 199 R.C.S.
NANTERRE Capital: 4,572,000 Euros This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. From sebastien.boeuf at intel.com Tue May 28 14:49:30 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Tue, 28 May 2019 14:49:30 +0000 Subject: [Rust-VMM] Organization membership In-Reply-To: <20190528094937.GG31838@caravaggio> References: <20190528094937.GG31838@caravaggio> Message-ID: On Tue, 2019-05-28 at 11:49 +0200, Samuel Ortiz wrote: > All, > > The community README has a "Become a member of rust-vmm" section [1] > describing how to open an issue for being added as a member of the > rust-vmm organization. This seems to imply that organization members > have special privileges or even worse, that one needs to be an > organization member to contribute to the project. > > We want to remove that perception because a) members do not have any > special permissions on the organization and b) any registered github > user can contribute to the project. > So we want to remove that section and the associated issue template. > Any > objections? Sounds good to me! Thanks, Sebastien > > Cheers, > Samuel. > > [1] https://github.com/rust-vmm/community#become-a-member-of-rust-vmm > --------------------------------------------------------------------- > Intel Corporation SAS (French simplified joint stock company) > Registered headquarters: "Les Montalets"- 2, rue de Paris, > 92196 Meudon Cedex, France > Registration Number: 302 456 199 R.C.S. NANTERRE > Capital: 4,572,000 Euros > > This e-mail and any attachments may contain confidential material for > the sole use of the intended recipient(s). Any review or distribution > by others is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies. 
> > > _______________________________________________ > Rust-vmm mailing list > Rust-vmm at lists.opendev.org > http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm From claire at openstack.org Tue May 28 17:57:10 2019 From: claire at openstack.org (Claire Massey) Date: Tue, 28 May 2019 12:57:10 -0500 Subject: [Rust-VMM] Reminder, May 29 Meeting Message-ID: <476695E0-F2BE-43D0-8D54-014C1A011B4F@openstack.org> Hi everyone, It’s been a while - so friendly reminder - there will be a Rust-VMM community call tomorrow, Wednesday, May 29 at 8:00am PST. Please add agenda topics here: https://etherpad.openstack.org/p/rust_vmm_2019_biweekly_calls . Dial in info: https://zoom.us/j/181523033 One tap mobile +16699006833,,181523033# US (San Jose) +16468769923,,181523033# US (New York) Dial by your location +1 669 900 6833 US (San Jose) +1 646 876 9923 US (New York) Meeting ID: 181 523 033 Find your local number: https://zoom.us/u/abOe7d3zx9 Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From chao.p.peng at intel.com Wed May 29 06:58:34 2019 From: chao.p.peng at intel.com (Peng, Chao P) Date: Wed, 29 May 2019 06:58:34 +0000 Subject: [Rust-VMM] Organization membership In-Reply-To: <20190528094937.GG31838@caravaggio> References: <20190528094937.GG31838@caravaggio> Message-ID: > All, > > The community README has a "Become a member of rust-vmm" section [1] describing how to open an issue for being added as a > member of the rust-vmm organization. This seems to imply that organization members have special privileges or even worse, that one > needs to be an organization member to contribute to the project. > > We want to remove that perception because a) members do not have any special permissions on the organization and b) any > registered github user can contribute to the project. > So we want to remove that section and the associated issue template. Any objections? Sounds reasonable. Chao > > Cheers, > Samuel. 
> > [1] https://github.com/rust-vmm/community#become-a-member-of-rust-vmm
> ---------------------------------------------------------------------
> Intel Corporation SAS (French simplified joint stock company) Registered headquarters: "Les Montalets"- 2, rue de Paris,
> 92196 Meudon Cedex, France
> Registration Number: 302 456 199 R.C.S. NANTERRE
> Capital: 4,572,000 Euros
>
> This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or
> distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.
>
> _______________________________________________
> Rust-vmm mailing list
> Rust-vmm at lists.opendev.org
> http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm

From fandree at amazon.com Thu May 30 15:42:26 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Thu, 30 May 2019 15:42:26 +0000
Subject: [Rust-VMM] vm-memory Pull Requests
Message-ID: <1559230945301.79804@amazon.com>

Hey everyone,

In case you are not following the Slack channel: we are now using Buildkite as the official CI for the vm-memory crate. We replaced Travis with Buildkite because vm-memory is also expected to work on Windows, and doing the checks with Travis was not straightforward. Anyway, if you have PRs open, please rebase them on top of the latest commit and fix the errors (if any).

Regards,
Andreea

Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sebastien.boeuf at intel.com Thu May 30 16:00:43 2019 From: sebastien.boeuf at intel.com (Boeuf, Sebastien) Date: Thu, 30 May 2019 16:00:43 +0000 Subject: [Rust-VMM] vm-memory Pull Requests In-Reply-To: <1559230945301.79804@amazon.com> References: <1559230945301.79804@amazon.com> Message-ID: <1E91073893EF8F498411079ED374F91246075D89@ORSMSX115.amr.corp.intel.com> Thanks for the heads up! ________________________________ From: Florescu, Andreea [fandree at amazon.com] Sent: Thursday, May 30, 2019 8:42 AM To: rust-vmm ML Subject: [Rust-VMM] vm-memory Pull Requests Hey everyone, In case you are not following the Slack channel: we are now using Buildkite as the official CI for the vm-memory crate. We changed Travis with Buildkite as the vm-memory is also expected to work on Windows and doing the checks with Travis was not straightforward. Anyway, if you have PRs open, please rebase them on top of the latest commit and fix the errors (if any). Regards, Andreea Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Thu May 30 16:53:01 2019 From: claire at openstack.org (Claire Massey) Date: Thu, 30 May 2019 11:53:01 -0500 Subject: [Rust-VMM] CFPs Open June-July Message-ID: Hi everyone, Several CFPs are open for events later this year. If you submit any Rust-VMM talks to these or other events please let the group know so we can coordinate. 
Listed in order by CFP deadline:
* 6/15 - KVM Forum in Lyon, Oct 30 - Nov 1 - https://events.linuxfoundation.org/events/kvm-forum-2019/
* 6/16 - ONS in Antwerp, Sept 23-25 - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/
* 6/22 - OpenInfra Days Nordics in Stockholm, Oct 2-3 - https://openinfranordics.com/
* 7/2 - Open Infrastructure Summit in Shanghai, Nov 4-6 - https://cfp.openstack.org/
* 7/12 - KubeCon/CloudNativeCon NA in San Diego, Nov 18-21 - https://linuxfoundation.smapply.io/prog/kccncna2019/

Thanks,
Claire
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fandree at amazon.com Thu May 30 16:58:46 2019
From: fandree at amazon.com (Florescu, Andreea)
Date: Thu, 30 May 2019 16:58:46 +0000
Subject: [Rust-VMM] CFPs Open June-July
In-Reply-To:
References:
Message-ID: <1559235525137.51331@amazon.com>

Hey,

Thanks for the CFP list! Samuel and I were planning on submitting a talk to KVM Forum. We're taking bits from previous presentations, but the plan is to make it more technical considering the crowd there.

Andreea

________________________________
From: Claire Massey
Sent: Thursday, May 30, 2019 7:53 PM
To: rust-vmm at lists.opendev.org
Subject: [Rust-VMM] CFPs Open June-July

Hi everyone,

Several CFPs are open for events later this year. If you submit any Rust-VMM talks to these or other events please let the group know so we can coordinate.
Listed in order by CFP deadline: * 6/15 - KVM Forum in Lyon Oct 30 - Nov 1 - https://events.linuxfoundation.org/events/kvm-forum-2019/ * 6/16 - ONS in Antwerp Sept 23-25 - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/ * 6/22 - OpenInfra Days Nordics in Stockholm Oct 2-3 - https://openinfranordics.com/ * 7/2 - Open Infrastructure Summit in Shanghai Nov 4-6 - https://cfp.openstack.org/ * 7/12 - KubeCon/CloudNativeCon NA in San Diego Nov 18-21 https://linuxfoundation.smapply.io/prog/kccncna2019/ Thanks, Claire Amazon Development Center (Romania) S.R.L. registered office: 27A Sf. Lazar Street, UBC5, floor 2, Iasi, Iasi County, 700045, Romania. Registered in Romania. Registration number J22/2621/2005. -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh at joshtriplett.org Thu May 30 18:39:46 2019 From: josh at joshtriplett.org (Josh Triplett) Date: Thu, 30 May 2019 11:39:46 -0700 Subject: [Rust-VMM] CFPs Open June-July In-Reply-To: References: Message-ID: <20190530183945.GA7596@localhost> On Thu, May 30, 2019 at 11:53:01AM -0500, Claire Massey wrote: > Hi everyone, > > Several CFPs are open for events later this year. If you submit any Rust-VMM talks to these or other events please let the group know so we can coordinate. > > Listed in order by CFP deadline: > * 6/15 - KVM Forum in Lyon Oct 30 - Nov 1 - https://events.linuxfoundation.org/events/kvm-forum-2019/ > * 6/16 - ONS in Antwerp Sept 23-25 - https://events.linuxfoundation.org/events/open-networking-summit-europe-2019/program/cfp/ > * 6/22 - OpenInfra Days Nordics in Stockholm Oct 2-3 - https://openinfranordics.com/ > * 7/2 - Open Infrastructure Summit in Shanghai Nov 4-6 - https://cfp.openstack.org/ > * 7/12 - KubeCon/CloudNativeCon NA in San Diego Nov 18-21 https://linuxfoundation.smapply.io/prog/kccncna2019/ Linux Plumbers Conference is still open as well: https://www.linuxplumbersconf.org/