ipfw2 for DragonFlyBSD

bycn82 bycn82 at gmail.com
Thu Dec 4 23:39:27 PST 2014


It would be good to give this "lock-less NAT" a try in order to see how it
performs, but IMHO the performance will not be good, especially when the
machine has many CPUs. Anyway, next will be a lockless "keep-state";
maybe this weekend :)

On Fri, Dec 5, 2014 at 3:11 PM, Sepherosa Ziehau <sepherosa at gmail.com>
wrote:

> On Fri, Dec 5, 2014 at 2:38 AM, Matthew Dillon
> <dillon at apollo.backplane.com> wrote:
> >     On how to make NAT work, what I did in PF was this:
> >
> >     (a) When the port is not locked to a particular number, I simply
> >         iterate ports until the Toeplitz hash for the translated
> >         address/port pair winds up on the same cpu as the Toeplitz
> >         hash of the original.
> >
> >         This way both sides of the NAT conversation wind up on the
> >         same cpu and no locking is required.
> >
> >     (b) If the translated port is locked (which is a feature that PF
> >         has, for example), it may not be possible to match up the
> >         Toeplitz hash.
> >
> >         In this situation the state goes into a global table with a
> >         global lock, and the state is individually locked by the
> >         filter.
> >
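
The scheme Matt describes in (a) and (b) above can be sketched as follows.
This is not PF's actual code, only a minimal self-contained illustration with
hypothetical names: a simplified Toeplitz hash over the 4-tuple (the key must
be at least len+4 bytes long), and a search for a translated source port whose
hash lands on the same CPU as the original flow:

    #include <stdint.h>
    #include <string.h>

    /* Simplified Toeplitz hash: for every set input bit, XOR in the
     * 32-bit window of the key starting at that bit position. */
    static uint32_t
    toeplitz_hash(const uint8_t *data, int len, const uint8_t *key)
    {
        uint32_t hash = 0;
        uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                          ((uint32_t)key[2] << 8) | key[3];
        int i, b;

        for (i = 0; i < len; i++) {
            for (b = 7; b >= 0; b--) {
                if (data[i] & (1 << b))
                    hash ^= window;
                /* slide the 32-bit key window one bit to the left */
                window = (window << 1) | ((key[i + 4] >> b) & 1);
            }
        }
        return hash;
    }

    /*
     * Case (a): the translated port is free to choose.  Scan ports until
     * the translated tuple hashes to the same CPU as the original flow,
     * so both directions stay on one netisr and lockless per-cpu state
     * can be used.  In case (b), when the port is pinned, this search is
     * skipped entirely and the state goes into a global, locked table.
     */
    static int
    nat_pick_port(uint32_t orig_hash, uint32_t nat_addr, uint32_t dst_addr,
        uint16_t dst_port, const uint8_t *key, int ncpus, uint16_t *portp)
    {
        uint8_t tuple[12];
        uint16_t port;

        for (port = 1024; port != 0; port++) { /* wraps: scans 1024..65535 */
            memcpy(&tuple[0], &nat_addr, 4);   /* translated src address */
            memcpy(&tuple[4], &dst_addr, 4);
            tuple[8]  = port >> 8;     tuple[9]  = port & 0xff;
            tuple[10] = dst_port >> 8; tuple[11] = dst_port & 0xff;

            if (toeplitz_hash(tuple, sizeof(tuple), key) % ncpus ==
                orig_hash % ncpus) {
                *portp = port;
                return 0;   /* both sides on one cpu; lockless state */
            }
        }
        return -1;          /* no suitable port: global table + lock */
    }

Since the hash is reduced modulo the CPU count, a matching port should be
found after roughly ncpus tries on average, so the scan is cheap.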
>
> In addition to what Matt has mentioned, I think lockless NAT could be
> implemented in the following way:
>
> - On the output path, install state for the current netisr, rehash the
> packet, and send it to the target netisr; install a 'sibling state' in
> the target netisr and do the real output there.
> - The same applies to the input path, except that the protocol input
> is done in the target netisr.
>
> However, the result may not be better than, or even as good as, the
> per-cpu + global-lock scheme Matt implemented for PF, since my way
> requires an additional dispatch.
>
> Best Regards,
> sephe
>
> --
> Tomorrow Will Never Die
>
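
sephe's two-step idea above might look roughly like this. Every function name
here is invented for illustration and is not the real DragonFly netisr API: a
state is installed on the current netisr, the packet is rehashed and handed to
the target netisr, and the sibling state plus the real output happen there:

    struct pkt;                 /* stand-in for an mbuf/packet */

    /* assumed primitives, hypothetical names */
    extern int  my_cpu(void);                     /* current netisr's cpu */
    extern int  rehash_cpu(const struct pkt *p);  /* cpu of rewritten tuple */
    extern void state_install(int cpu, struct pkt *p); /* per-cpu, lockless */
    extern void real_output(struct pkt *p);
    extern void netisr_run_on(int cpu,            /* run fn in cpu's netisr */
        void (*fn)(struct pkt *), struct pkt *p);

    /* Executes in the *target* netisr. */
    static void
    nat_output_sibling(struct pkt *p)
    {
        state_install(my_cpu(), p);   /* install the 'sibling state' */
        real_output(p);               /* do the real output here */
    }

    /* Executes in the *current* netisr on the output path. */
    static void
    nat_output(struct pkt *p)
    {
        int target;

        state_install(my_cpu(), p);   /* state for the current netisr */
        target = rehash_cpu(p);       /* rehash the translated packet */
        if (target == my_cpu())
            nat_output_sibling(p);    /* no extra dispatch needed */
        else
            netisr_run_on(target, nat_output_sibling, p);
    }

The netisr_run_on() hand-off is the "additional dispatch" sephe expects to
cost more than the per-cpu + global-lock approach; the input path would be
symmetric, with the protocol input performed in the target netisr.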