GSoC: Add SMT/HT awareness to DragonFlyBSD scheduler

Mihai Carabas mihai.carabas at gmail.com
Sun Jul 8 02:29:36 PDT 2012


Hello,

This week I worked on a heuristic that goes like this: always schedule a
process on the CPU closest to the one it ran on before (e.g. try the CPU it
last ran on; if that isn't free, try its sibling at the current topology
level, say the thread level; if no sibling is found, go up to the next level,
which is the core level, and so on). In other words, the process is scheduled
as close as possible to a warm cache: if it can't run on the same core it
loses the L1 cache hotness, but it can still benefit from L3 cache hotness.
Unfortunately, I couldn't find a case where this makes a measurable
difference on my Core i3. The likely reason is that the time quantum in
DragonFly is large enough that a scheduled process touches a great part of
the L1/L2 cache anyway, so by the next context switch those caches are
effectively invalidated; and since the L3 cache is shared among all cores on
this CPU, placement makes no difference there. I am waiting for access to a
multi-socket machine to test it there.
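
For illustration, here is a minimal sketch of what that walk up the topology
could look like. The structure and function names (topo_node,
find_closest_free_cpu, the cpu masks) are hypothetical and only meant to show
the idea, not the actual DragonFly scheduler code:

    #include <stdint.h>

    /*
     * Hypothetical topology node: each level (thread, core, socket)
     * has a parent link and a mask of the cpus it contains.
     */
    struct topo_node {
            struct topo_node *parent;      /* next topology level up */
            uint64_t          cpu_mask;    /* cpus under this node */
    };

    /* Return the closest free cpu to last_cpu, or -1 if none is free. */
    static int
    find_closest_free_cpu(struct topo_node *leaf, int last_cpu,
                          uint64_t free_mask)
    {
            struct topo_node *node = leaf;   /* node containing last_cpu */
            uint64_t visited = 0;            /* cpus already found busy */

            /* Best case: the cpu it ran on before is still free. */
            if (free_mask & (1ULL << last_cpu))
                    return (last_cpu);

            /* Walk up: thread level -> core level -> socket level ... */
            while (node != NULL) {
                    uint64_t cand = node->cpu_mask & free_mask & ~visited;
                    if (cand != 0)
                            return (__builtin_ctzll(cand)); /* a free sibling */
                    visited |= node->cpu_mask;  /* this level was all busy */
                    node = node->parent;        /* fall back one cache level */
            }
            return (-1);
    }

The further the walk has to go up before it finds a free cpu, the less cache
sharing the chosen cpu has with the previous one, which is exactly the
trade-off described above.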
