<div dir="ltr"><b>Hi Venkatesh,</b><div><br></div><div>Thanks a lot for a nice explanatory mail! </div><div><br></div><div>I found the second project quite interesting, just to repeat -- </div><div><span style="font-family:arial,sans-serif;font-size:13px">** virtio-blk currently 'kicks' the host VMM directly from its</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px"> strategy() routine; it may make sense to defer this to a taskqueue. This</span><br style="font-family:arial,sans-serif;font-size:13px"><span style="font-family:arial,sans-serif;font-size:13px"> is a tiny change, but understanding the performance implications may</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px"> take a bit longer.</span><br></div><div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div><font face="arial, sans-serif">As per my understanding, the notification to host through kick() involves the exit of the guest (as per attached paper). Hence, the aim for this project would also be to minimize the exits, or rather instantaneous exits. Deferring it to the task queue should definitely help. Other things that can be done in this project is batching of the buffers before kick() and dynamically deciding how many buffers can be batched together. </font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Also, not all of the times deferring the 'kick' to taskqueue might not be a good idea. Finding those scenarios would be interesting where it helps, and where it is merely an overhead.</font></div>
<div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">I am also interested in performance analysis for the same. It would be great if you can point me to some more references to gain the prerequisite knowledge for the same.</font></div>
<div><span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div class="gmail_extra"><div><b><div><b>---------------------------- <br></b></div>Thanks & Regards<br><font color="#888888">Mohit Dhingra <br>
+919611190435</font></b></div>
<br><br><div class="gmail_quote">On 5 March 2013 04:39, Venkatesh Srinivas <span dir="ltr"><<a href="mailto:vsrinivas@ops101.org" target="_blank">vsrinivas@ops101.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Fri, Mar 01, 2013 at 10:20:18PM +0530, Mohit Dhingra wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi All,<div class="im"><br>
<br>
I am a final year student, Masters by Research, at Indian Institute of<br>
Science Bangalore. I want to work in GSoC 2013. I did a survey of projects<br>
that people did earlier, and I am interested in the projects related to<br>
device drivers and virtualization. I looked at "porting virtio device<br>
drivers from NetBSD to DragonFly BSD". I have done a research project on<br>
virtualization and presented a paper "Resource Usage Monitoring in Clouds"<br>
in IEEE/ACM Grid in Beijing last year. Can someone please suggest some<br>
topics related to this which are yet to be implemented on DragonFly BSD?<br>
</div></blockquote>
<br>
Hi,<br>
<br>
What area(s) of virtualization are you interested in? DFly as a guest?<br>
As a host? What VMMs?<br>
<br>
If you're interested in DragonFly as a virtualization guest on<br>
qemu/KVM or another platform that exports virtio paravirtualized<br>
devices, there is some work left on the virtio guest drivers --<br>
<br>
* virtio-blk:<br>
I cleaned up and brought in Tim Bisson's port of FreeBSD's virtio-blk<br>
driver in January, based on work dating back to last April. The driver<br>
works okay, but has a few issues:<br>
<br>
** qemu exports a 128-entry virtqueue for virtio-blk; DragonFly as a<br>
guest doesn't support indirect ring entries and can issue up to 64 KiB<br>
I/Os, so we can very easily slam into the virtqueue size limit. If we<br>
force qemu to expose a larger virtqueue, we can reach ~95% of host<br>
throughput. Virtio supports up to 16k-entry virtqueues, but qemu+SeaBIOS<br>
can only boot from 128-entry or smaller queues. Adding indirect ring<br>
support would help w/ bandwidth here; this is a pretty small project.<br>
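<br>
As a rough illustration of what indirect ring support means at the ring<br>
level (descriptor layout per the virtio spec; the helper and parameter<br>
names below are just for the sketch, not the driver's actual code):<br>
<pre>
#include <sys/types.h>

/* Ring descriptor layout from the virtio specification. */
struct vring_desc {
	uint64_t addr;		/* guest-physical address of the buffer */
	uint32_t len;		/* buffer length in bytes */
	uint16_t flags;		/* NEXT / WRITE / INDIRECT */
	uint16_t next;		/* index of the next descriptor in the chain */
};

#define VRING_DESC_F_NEXT	1	/* chain continues at 'next' */
#define VRING_DESC_F_WRITE	2	/* device writes into this buffer */
#define VRING_DESC_F_INDIRECT	4	/* 'addr' points at a descriptor table */

/*
 * Describe one virtio-blk request (header + one data segment + status
 * byte) using a single slot of the main ring.  With indirect descriptors
 * (feature VIRTIO_RING_F_INDIRECT_DESC) each outstanding request costs
 * one ring slot regardless of how many segments it has, so a 128-entry
 * queue no longer fills up after a handful of large I/Os.
 */
static void
vtblk_fill_indirect(struct vring_desc *ring_slot,
    struct vring_desc *table, uint64_t table_pa,
    uint64_t hdr_pa, uint64_t data_pa, uint32_t data_len,
    uint64_t status_pa, int device_writes_data)
{
	table[0].addr = hdr_pa;			/* struct virtio_blk_outhdr */
	table[0].len = 16;
	table[0].flags = VRING_DESC_F_NEXT;
	table[0].next = 1;

	table[1].addr = data_pa;
	table[1].len = data_len;
	table[1].flags = VRING_DESC_F_NEXT |
	    (device_writes_data ? VRING_DESC_F_WRITE : 0);
	table[1].next = 2;

	table[2].addr = status_pa;		/* 1-byte status, device-written */
	table[2].len = 1;
	table[2].flags = VRING_DESC_F_WRITE;
	table[2].next = 0;

	/* The main-ring slot points at the table instead of the buffers. */
	ring_slot->addr = table_pa;
	ring_slot->len = 3 * sizeof(struct vring_desc);
	ring_slot->flags = VRING_DESC_F_INDIRECT;
	ring_slot->next = 0;
}
</pre>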
<br>
** virtio-blk currently 'kicks' the host VMM directly from its<br>
strategy() routine; it may make sense to defer this to a taskqueue. This<br>
is a tiny change, but understanding the performance implications may<br>
take a bit longer.<br>
<br>
** virtio-blk doesn't support dumps (crash/panics). Fixing this would be<br>
pretty straightforward and small in scope.<br>
<br>
* virtio-net:<br>
I have a port of virtio-net, again based on Tim's work,<br>
but it usually tests slower than em (e1000) on netperf<br>
TCP_STREAM/TCP_RR tests. Improving on this port, re-adding indirect<br>
ring support (much less of a win for virtio-net compared to -blk),<br>
and implementing multiqueue would perhaps be large enough for a SoC<br>
project.<br>
<br>
* Our virtio drivers don't support MSI-X; this is not a huge deal for<br>
virtio-blk, but virtio-net would really benefit from it.<br>
<br>
* There are other virtio devices; virtio-scsi is a paravirtualized SCSI<br>
adapter, there is a FreeBSD driver that could serve as a good start<br>
for a port. virtio-scsi allows multiple targets per-PCI device, which<br>
is nice, and newer versions of the adapter support multiple request<br>
queues. Porting this and implementing multiqueue would be a nice project too.<br>
<br>
* When running DragonFly as a guest on KVM, we take a lot of VM exits,<br>
particularly to our host/VMM's APIC device. We could implement a<br>
kvmclock time source to avoid some timer exits and support for the<br>
paravirtualized EOI (<a href="https://lwn.net/Articles/502176/" target="_blank">https://lwn.net/Articles/502176/</a>) in our platform<br>
code to cut down on VM exits.<br>
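<br>
For reference, kvmclock is just a shared page the guest registers with the<br>
host through an MSR and then reads locklessly; roughly like the sketch<br>
below (following the KVM pvclock ABI; wrmsr()/rdtsc()/vtophys()/cpu_lfence()<br>
stand in for whatever helpers our platform code uses, and the 64-bit<br>
multiply is a simplification):<br>
<pre>
#include <sys/types.h>

#define MSR_KVM_SYSTEM_TIME_NEW		0x4b564d01
#define KVM_FEATURE_CLOCKSOURCE2	(1 << 3)   /* CPUID 0x40000001, %eax */

/* Per-vCPU time info page shared with the host (KVM pvclock ABI). */
struct pvclock_vcpu_time_info {
	uint32_t version;		/* odd while the host is updating */
	uint32_t pad0;
	uint64_t tsc_timestamp;		/* guest TSC at last host update */
	uint64_t system_time;		/* nanoseconds at tsc_timestamp */
	uint32_t tsc_to_system_mul;	/* fixed-point TSC->ns multiplier */
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
} __packed;

static struct pvclock_vcpu_time_info kvmclock_page __aligned(32);

static void
kvmclock_register(void)
{
	/* Tell the host where to publish time info; bit 0 enables it. */
	wrmsr(MSR_KVM_SYSTEM_TIME_NEW, vtophys(&kvmclock_page) | 1);
}

static uint64_t
kvmclock_read_ns(void)
{
	uint32_t ver;
	uint64_t tsc, delta, ns;

	do {
		ver = kvmclock_page.version;	/* retry if an update races us */
		cpu_lfence();
		tsc = rdtsc();
		delta = tsc - kvmclock_page.tsc_timestamp;
		if (kvmclock_page.tsc_shift >= 0)
			delta <<= kvmclock_page.tsc_shift;
		else
			delta >>= -kvmclock_page.tsc_shift;
		/* Real code needs a 64x32->96-bit multiply here. */
		ns = kvmclock_page.system_time +
		    ((delta * kvmclock_page.tsc_to_system_mul) >> 32);
		cpu_lfence();
	} while (kvmclock_page.version != ver || (ver & 1));

	return (ns);
}
</pre>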
<br>
---<br>
DragonFly also has a neat mechanism to allow DFly kernels to run as user<br>
processes on itself ('vkernels'). If you are interested in vkernels,<br>
there is a fair bit of performance work that could be done on them:<br>
<br>
* vkernels currently use a shadow translation scheme for virtual memory<br>
for their guests. (see <a href="http://acm.jhu.edu/~me/vpgs.c" target="_blank">http://acm.jhu.edu/~me/vpgs.c</a> for an example of<br>
how that interface works). The shadow translation scheme works on<br>
every x86/x86-64 CPU, but on CPUs w/ second generation virtualization<br>
(RVI on AMD or EPT on Intel/Via), we can do much better using those<br>
hardware guest pagetable walkers.<br>
<br>
* DragonFly BSD has simple process checkpointing/restore. In 2011, Irina<br>
Presa worked on a project to enable checkpoint save/restore of a<br>
virtual kernel<br>
(<a href="http://leaf.dragonflybsd.org/mailarchive/kernel/2011-04/msg00008.html" target="_blank">http://leaf.dragonflybsd.org/<u></u>mailarchive/kernel/2011-04/<u></u>msg00008.html</a>).<br>
Continuing this GSoC would be pretty neat; the ability to freeze and<br>
thaw a virtual kernel would allow all sorts of interesting use cases.<br>
<br>
* The virtual kernel host code has simple implementations of<br>
paravirtualized devices, vkdisk and vknet; understanding their<br>
performance and perhaps replacing them with host-side virtio devices<br>
might be worthwhile.<br>
<br>
---<br>
A brave last project would be to re-implement the Linux KVM (or some<br>
subset-of) interface on DragonFly. If you're interested in this kind<br>
of project, I'd be happy to explain a bit further...<br>
<br>
<br>
Take care,<br>
-- vs;<br>
<a href="http://ops101.org/" target="_blank">http://ops101.org/</a><br>
</blockquote></div><br></div></div>