[Rust-VMM] Call for GSoC and Outreachy project ideas for summer 2022

Stefan Hajnoczi stefanha at gmail.com
Tue Feb 22 09:48:20 UTC 2022


On Mon, 21 Feb 2022 at 12:00, Klaus Jensen <its at irrelevant.dk> wrote:
>
> On Feb 21 09:51, Stefan Hajnoczi wrote:
> > On Mon, 21 Feb 2022 at 06:14, Klaus Jensen <its at irrelevant.dk> wrote:
> > >
> > > On Jan 28 15:47, Stefan Hajnoczi wrote:
> > > > Dear QEMU, KVM, and rust-vmm communities,
> > > > QEMU will apply for Google Summer of Code 2022
> > > > (https://summerofcode.withgoogle.com/) and has been accepted into
> > > > Outreachy May-August 2022 (https://www.outreachy.org/). You can now
> > > > submit internship project ideas for QEMU, KVM, and rust-vmm!
> > > >
> > > > If you have experience contributing to QEMU, KVM, or rust-vmm you can
> > > > be a mentor. It's a great way to give back and you get to work with
> > > > people who are just starting out in open source.
> > > >
> > > > Please reply to this email by February 21st with your project ideas.
> > > >
> > > > Good project ideas are suitable for remote work by a competent
> > > > programmer who is not yet familiar with the codebase. In
> > > > addition, they are:
> > > > - Well-defined - the scope is clear
> > > > - Self-contained - there are few dependencies
> > > > - Uncontroversial - they are acceptable to the community
> > > > - Incremental - they produce deliverables along the way
> > > >
> > > > Feel free to post ideas even if you are unable to mentor the project.
> > > > It doesn't hurt to share the idea!
> > > >
> > > > I will review project ideas and keep you up-to-date on QEMU's
> > > > acceptance into GSoC.
> > > >
> > > > Internship program details:
> > > > - Paid, remote work open source internships
> > > > - GSoC projects are 175 or 350 hours, Outreachy projects are 30
> > > > hrs/week for 12 weeks
> > > > - Mentored by volunteers from QEMU, KVM, and rust-vmm
> > > > - Mentors typically spend at least 5 hours per week during the coding period
> > > >
> > > > Changes since last year: GSoC now has 175 or 350 hour project sizes
> > > > instead of 12 week full-time projects. GSoC will accept applicants who
> > > > are not students; previously it was limited to students.
> > > >
> > > > For more background on QEMU internships, check out this video:
> > > > https://www.youtube.com/watch?v=xNVCX7YMUL8
> > > >
> > > > Please let me know if you have any questions!
> > > >
> > > > Stefan
> > > >
> > >
> > > Hi,
> > >
> > > I'd like to revive the "NVMe Performance" proposal from Paolo and Stefan
> > > from two years ago.
> > >
> > >   https://wiki.qemu.org/Internships/ProjectIdeas/NVMePerformance
> > >
> > > I'd like to mentor, but since this is "iothread-heavy", I'd like to be
> > > able to draw a bit on Stefan and Paolo, if possible.
> >
> > Hi Klaus,
> > I can give input but I probably will not have enough time to
> > participate as a full co-mentor or review every line of every patch.
> >
>
> Of course Stefan, I understand - I did not expect you to co-mentor :)
>
> > If you want to go ahead with the project, please let me know and I'll post it.
> >
>
> Yes, I'll go ahead as mentor for this.
>
> @Keith, if you want to join in, let us know :)
>
> > One thing I noticed about the project idea is that KVM ioeventfd
> > doesn't have the right semantics to emulate the traditional Submission
> > Queue Tail Doorbell register. The issue is that ioeventfd does not
> > capture the value written by the driver to the MMIO register. eventfd
> > is a simple counter so QEMU just sees that the guest has written but
> > doesn't know which value. Although ioeventfd has modes for matching
> > specific values, I don't think that can be used for NVMe Submission
> > Queues because there are too many possible register values and each
> > one requires a separate file descriptor. That could mean hundreds of
> > ioeventfds per submission queue, which won't scale.
> >
> > The good news is that when the Shadow Doorbell Buffer is implemented
> > and enabled by the driver, then I think it becomes possible to use
> > ioeventfd for the Submission Queue Tail Doorbell.
> >
>
> Yes, I agree.
>
> > I wanted to mention this so applicants/interns don't go down a dead
> > end trying to figure out how to use ioeventfd for the traditional
> > Submission Queue Tail Doorbell register.
> >
>
> Yeah, that's what the Shadow Doorbell mechanism is for.
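
Right. To make this concrete for applicants, here is a rough, untested
sketch of what the ioeventfd path could look like once the Shadow Doorbell
Buffer is enabled. The sq->notifier and sq->db_addr fields and the dstrd
variable are placeholders I'm inventing for illustration (n being the
NvmeCtrl, sq the queue, nvme_process_sq() standing in for the existing
submission queue processing); the QEMU interfaces assumed are
event_notifier_init(), event_notifier_set_handler(),
memory_region_add_eventfd() and pci_dma_read():

  /* Sketch only -- names are indicative, not exact. */
  static void nvme_sq_notifier(EventNotifier *e)
  {
      NvmeSQueue *sq = container_of(e, NvmeSQueue, notifier);
      uint32_t v;

      /*
       * The eventfd only says "the SQ tail doorbell was written"; it does
       * not carry the written value.  With the Shadow Doorbell Buffer
       * enabled the driver has already stored the new tail in guest
       * memory, so read it from there instead of from the MMIO write.
       */
      pci_dma_read(PCI_DEVICE(sq->ctrl), sq->db_addr, &v, sizeof(v));
      sq->tail = le32_to_cpu(v);

      nvme_process_sq(sq);
  }

  /*
   * Registration when the queue is created and the shadow buffer is
   * enabled.  0x1000 + 2 * sqid * (4 << CAP.DSTRD) is the SQ y Tail
   * doorbell offset from the NVMe spec.
   */
  event_notifier_init(&sq->notifier, 0);
  event_notifier_set_handler(&sq->notifier, nvme_sq_notifier);
  memory_region_add_eventfd(&n->iomem,
                            0x1000 + 2 * sq->sqid * (4 << dstrd),
                            4, false /* no datamatch */, 0, &sq->notifier);
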
>
> Suggested updated summary:
>
> QEMU's NVMe emulation uses the traditional trap-and-emulate approach for
> I/O, so performance suffers due to frequent VM exits.
> Version 1.3 of the NVMe specification defines a new feature to update
> doorbell registers using a Shadow Doorbell Buffer. This can be utilized
> to enhance performance of emulated controllers by reducing the number of
> Submission Queue Tail Doorbell writes.
>
> Furthermore, it is possible to run emulation in a dedicated thread
> called an IOThread. Emulating NVMe in a separate thread allows the vcpu
> thread to continue execution and results in better performance.
>
> Finally, it is possible for the emulation code to watch for changes to
> the queue memory instead of waiting for doorbell writes. This technique
> is called polling and reduces notification latency at the expense of
> another thread consuming CPU to detect queue activity.
>
> The goal of this project is to implement these optimizations so
> QEMU's NVMe emulation performance becomes comparable to virtio-blk
> performance.
>
> Tasks include:
>
>     Add Shadow Doorbell Buffer support to reduce doorbell writes
>     Add Submission Queue polling
>     Add IOThread support so emulation can run in a dedicated thread
>
> Maybe add a link to this previous discussion as well:
>
> https://lore.kernel.org/qemu-devel/1447825624-17011-1-git-send-email-mlin@kernel.org/T/#u
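
On the IOThread task: what I have in mind is roughly the following, again
only a sketch. n->iothread is a hypothetical link property on the
controller, and the exact aio_set_event_notifier() argument list has
changed between QEMU versions, so treat the call as indicative:

  /*
   * Instead of event_notifier_set_handler() (main loop), attach the SQ
   * notifier to the IOThread's AioContext so that queue processing runs
   * outside the vcpu thread.
   */
  AioContext *ctx = iothread_get_aio_context(n->iothread);

  aio_set_event_notifier(ctx, &sq->notifier, true /* is_external */,
                         nvme_sq_notifier, /* handler from the sketch above */
                         NULL);            /* no poll callback yet */
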

Great, I have added the project idea. I left in the sq doorbell
ioeventfd task but moved it after the Shadow Doorbell Buffer support
task and made it clear that the ioeventfd can only be used when the
Shadow Doorbell Buffer is enabled:
https://wiki.qemu.org/Google_Summer_of_Code_2022#NVMe_Emulation_Performance_Optimization
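
For the polling task, the same registration could then grow a poll callback
that watches the shadow tail in guest memory directly, along these lines
(untested, same made-up field names as above; whether the actual work
happens in the poll callback or in a separate poll-ready callback depends
on the QEMU version):

  /*
   * Called by the AioContext while it spins instead of sleeping in
   * ppoll().  Returns true when the shadow tail has moved, i.e. the
   * driver queued new commands without us taking a doorbell exit.
   */
  static bool nvme_sq_notifier_poll(void *opaque)
  {
      EventNotifier *e = opaque;
      NvmeSQueue *sq = container_of(e, NvmeSQueue, notifier);
      uint32_t v;

      pci_dma_read(PCI_DEVICE(sq->ctrl), sq->db_addr, &v, sizeof(v));
      if (le32_to_cpu(v) == sq->tail) {
          return false;           /* nothing new in the queue */
      }

      sq->tail = le32_to_cpu(v);
      nvme_process_sq(sq);        /* process entries right here */
      return true;
  }

  /* ...passed as the io_poll argument of aio_set_event_notifier(). */
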

Stefan


