Why VMware Virtual disk so slow (da0: 3.300MB/s transfers) ?

Dongsheng Song dongsheng.song at gmail.com
Sun Aug 25 09:44:18 PDT 2013


On Sat, Aug 24, 2013 at 4:33 PM, Sascha Wildner <saw at online.de> wrote:
> On Sat, 24 Aug 2013 09:45:02 +0200, Dongsheng Song
> <dongsheng.song at gmail.com> wrote:
>
>> On Sat, Aug 24, 2013 at 11:52 AM, Sascha Wildner <saw at online.de> wrote:
>>>
>>> [...]
>>>
>>> Could you try with this patch please?
>>>
>>> http://leaf.dragonflybsd.org/~swildner/cam_probe.diff
>>>
>>> Sascha
>>
>>
>> No IO speed difference (still 0.200 ~ 0.400 MB/s with direct IO).
>>
>> The dmesg output just add new line:
>> da0: Command Queueing Enabled
>
>
> Ok, it might have been incomplete. Can you try this patch instead?
>
> http://leaf.dragonflybsd.org/~swildner/cam_probe2.diff
>
> It includes the first patch, so undo the first one before applying this one.
>
> If that too doesn't help, can you check how many dev_openings it shows with
> 'camcontrol tags da0 -v'? mintags and maxtags give the limits. If
> dev_openings is 1, can you check if raising helps, which should work with
> 'camcontrol tags da0 -N <number>', within the limits.
>
> Sascha

With the second patch, the dmesg output changed to 'da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)':

# dmesg | grep -E "da0|mpt0"
mpt0: <LSILogic 1030 Ultra4 Adapter> port 0x1400-0x14ff mem 0xd0020000-0xd003ffff,0xd0040000-0xd005ffff irq 11 at device 16.0 on pci0
mpt0: MPI Version=1.2.0.0
disk scheduler: set policy of da0 to noop
da0 at mpt0 bus 0 target 0 lun 0
da0: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit)
da0: Command Queueing Enabled
da0: 65536MB (134217728 512 byte sectors: 255H 63S/T 8354C)
Mounting root from ufs:/dev/da0s1a

# camcontrol tags da0 -v
(pass1:mpt0:0:0:0): dev_openings  127
(pass1:mpt0:0:0:0): dev_active    0
(pass1:mpt0:0:0:0): devq_openings 127
(pass1:mpt0:0:0:0): devq_queued   0
(pass1:mpt0:0:0:0): held          0
(pass1:mpt0:0:0:0): mintags       2
(pass1:mpt0:0:0:0): maxtags       255

During the IO test, 'dev_active' went up to about 100.

IO speed then improved a lot: I got 49.0 MB/s with direct IO.
For comparison, FreeBSD 9.1 gives me 50.5 MB/s with direct IO.
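
For anyone who wants to repeat the test: a plain sequential read of the raw device is one way to get a comparable number (device name, block size and count below are only examples, not my exact invocation); dd prints the transfer rate when it finishes:

# dd if=/dev/da0 of=/dev/null bs=1m count=4096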

This test confirmed that command queueing can improve IO performance
dramatically. Thank you very much!

I then tried reverting the patch and instead raising dev_openings from
1 to 20, and I got nearly the same results.
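
For reference, raising the queue depth at runtime was done with camcontrol, exactly as you suggested (20 is simply the value I picked between mintags and maxtags):

# camcontrol tags da0 -N 20
# camcontrol tags da0 -v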


