ipfw2 for DragonFlyBSD

Sepherosa Ziehau sepherosa at gmail.com
Fri Dec 5 01:25:06 PST 2014


On Fri, Dec 5, 2014 at 3:39 PM, bycn82 <bycn82 at gmail.com> wrote:
> It would be good to give this "lock-less NAT" a try to see how it performs,
> but IMHO the performance will not be good, especially when the machine has
> lots of CPUs.


The number of CPUs does not matter that much, since it's between at most two
CPUs (two netisrs).
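
Roughly, what I mean is something like the following sketch; toeplitz_hash()
and the helper around it are made up for illustration, not the actual
netisr/ipfw2 code:

#include <stdint.h>

/* stand-in for the kernel's Toeplitz hash over the 4-tuple */
uint32_t toeplitz_hash(uint32_t saddr, uint32_t daddr,
    uint16_t sport, uint16_t dport);

/*
 * A NAT'ed connection involves the netisr that the original 4-tuple
 * hashes to and the netisr that the translated 4-tuple hashes to:
 * at most two CPUs, regardless of how many CPUs the machine has.
 */
static int
nat_cpus_involved(uint32_t saddr, uint32_t daddr,
    uint16_t sport, uint16_t dport,
    uint32_t nat_addr, uint16_t nat_port, int ncpus)
{
	/* netisr handling the original (pre-translation) flow */
	int cpu_orig = toeplitz_hash(saddr, daddr, sport, dport) % ncpus;
	/* netisr handling the translated (post-NAT) flow */
	int cpu_xlat = toeplitz_hash(nat_addr, daddr, nat_port, dport) % ncpus;

	return (cpu_orig == cpu_xlat) ? 1 : 2;
}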

Best Regards,
sephe


>
> Anyway, next will be "lockless" for "keep-state", maybe this weekend :)
>
>
> On Fri, Dec 5, 2014 at 3:11 PM, Sepherosa Ziehau <sepherosa at gmail.com>
> wrote:
>>
>> On Fri, Dec 5, 2014 at 2:38 AM, Matthew Dillon
>> <dillon at apollo.backplane.com> wrote:
>> >     On how to make NAT work, what I did in PF was this:
>> >
>> >     (a) When the port is not locked to a particular number, I simply
>> >         iterate ports until the Toeplitz hash for the translated
>> >         address/port pair winds up on the same cpu as the Toeplitz hash
>> >         of the original.
>> >
>> >         This way both sides of the NAT conversation wind up on the same
>> >         cpu and no locking is required.
>> >
>> >     (b) If the translated port is locked (which is a feature that PF
>> >         has, for example), it may not be possible to match up the
>> >         Toeplitz hash.
>> >
>> >         In this situation the state goes into a global table with a
>> >         global lock, and the state is individually locked by the filter.
>> >
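
(A rough sketch of what (a) above boils down to; the names are made up and
the hash is assumed to be arranged so that both directions of a flow map to
the same netisr; this is not the actual PF code:)

#include <stdint.h>

/* stand-in for the kernel's Toeplitz hash over the 4-tuple */
uint32_t toeplitz_hash(uint32_t saddr, uint32_t daddr,
    uint16_t sport, uint16_t dport);

/*
 * Walk the candidate translated ports until the Toeplitz hash of the
 * translated tuple lands on the same netisr as the original tuple, so
 * both sides of the NAT'ed conversation stay on one CPU without locking.
 */
static int
pick_nat_port(uint32_t saddr, uint32_t daddr, uint16_t sport, uint16_t dport,
    uint32_t nat_addr, uint16_t port_lo, uint16_t port_hi, int ncpus,
    uint16_t *nat_portp)
{
	uint32_t target_cpu = toeplitz_hash(saddr, daddr, sport, dport) % ncpus;
	uint32_t port;

	for (port = port_lo; port <= port_hi; ++port) {
		if (toeplitz_hash(nat_addr, daddr, port, dport) % ncpus ==
		    target_cpu) {
			*nat_portp = (uint16_t)port;
			return 0;	/* same netisr for both sides */
		}
	}
	return -1;	/* no match; fall back to the globally locked table (b) */
}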
>>
>> In addition to what Matt has mentioned, I think lockless NAT could be
>> implemented in the following way:
>>
>> - On the output path, install state for the current netisr, then rehash
>>   the packet and send it to the target netisr; install a 'sibling state'
>>   there and do the real output in the target netisr.
>> - The same applies to the input path, except that the protocol input is
>>   done in the target netisr.
>>
>> However, the result may not be better than or as good as the
>> per-cpu + global-lock scheme Matt implemented for PF, since my way
>> requires an additional dispatch.
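
A rough sketch of the output side; all the names (nat_state_install(),
netisr_dispatch(), hash_to_cpu(), ...) are made up for illustration and are
not taken from the real ipfw2/netisr code:

struct mbuf;

/* stand-ins for the real primitives */
int	current_netisr_cpu(void);
int	hash_to_cpu(const struct mbuf *m);	/* Toeplitz rehash of the packet */
void	netisr_dispatch(int cpu, void (*func)(struct mbuf *), struct mbuf *m);
void	nat_state_install(int cpu, struct mbuf *m);
void	ip_output_real(struct mbuf *m);

static void
nat_output_remote(struct mbuf *m)
{
	/* Runs in the target netisr: install the sibling state, then output. */
	nat_state_install(current_netisr_cpu(), m);
	ip_output_real(m);
}

static void
nat_output(struct mbuf *m)
{
	int cpu;

	/* 1. Install state in the per-cpu table of the current netisr. */
	nat_state_install(current_netisr_cpu(), m);

	/* 2. Rehash the translated packet to find the target netisr. */
	cpu = hash_to_cpu(m);

	if (cpu == current_netisr_cpu()) {
		/* Same netisr; no extra dispatch is needed. */
		ip_output_real(m);
	} else {
		/* 3. Hand off; the sibling state is installed over there. */
		netisr_dispatch(cpu, nat_output_remote, m);
	}
}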
>>
>> Best Regards,
>> sephe
>>
>> --
>> Tomorrow Will Never Die
>
>



-- 
Tomorrow Will Never Die


