Core 0 constantly bogged down by interrupts

Mike Zhang furry_for_puffy at runbox.com
Sun Dec 23 18:04:54 PST 2018


Thank you for the quick response :) See below:

interrupt                                total       rate
irq4     0: sio0                             0          0
irq9     0: acpi0                    116480429      16186
irq17    0: xhci0                       138974         19
irq18    0: vgapci0                    1019997        141
irq192   0: swi_siopoll                      0          0
irq196   0: swi_vm                           0          0
irq1     1: atkbd0                           0          0
irq16    1: ahci0                       166439         23
irq197   1: swi_mp_taskq/swi_taskq       24450          3
irq16    2: em0                          21106          2
irq16    3: hdac0                           82          0
irq195   3: swi_cambio                  168341         23
Total                                118019818      16400

I guess what I'm seeing is one of those ACPI interrupt storms? irq9/acpi0 is
running at around 16,000 interrupts per second, which accounts for almost all
of the total.
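
A rough sketch of how I might confirm the storm and narrow it down, assuming
DragonFly keeps the FreeBSD-style acpi(4) knobs (please correct me if these
don't apply here):

  # Check whether the irq9 (ACPI SCI) rate stays pegged across samples;
  # a genuine storm keeps firing at thousands of interrupts per second.
  vmstat -i ; sleep 5 ; vmstat -i

  # Watch the live interrupt counters as they update.
  systat -vmstat 1

  # See which ACPI subsystems are attached and what knobs are exposed.
  sysctl hw.acpi

  # As an experiment, individual ACPI subsystems can be disabled from
  # loader.conf ("ec" below is only a hypothetical guess at a culprit):
  #   debug.acpi.disabled="ec"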

Mike


On Sun, Dec 23, 2018 at 05:37:55PM PST, Sepherosa Ziehau wrote:
> Please post the result of 'vmstat -iv'
> 
> Thanks,
> sephe
> 
> On Mon, Dec 24, 2018 at 9:30 AM Mike Zhang <furry_for_puffy at runbox.com> wrote:
> >
> > Hi everyone, newbie here...
> >
> > I'm in the process of migrating to DragonFly for my main desktop as I
> > figured it would make more efficient use of SMP than my former OS of choice.
> > Ironically, the problem I'm running into is that core 0 seems to be constantly
> > maxed out, with about 60-70% going to Interrupts and the rest to System.
> > This happens both under heavy loads:
> >
> > > load averages:  4.03,  4.38,  4.49;               up 14+03:18:26       13:34:08
> > > 90 processes: 6 running, 90 active
> > > CPU states:  0.0% user,  2.8% nice, 39.8% system, 57.4% interrupt,  0.0% idle
> > > CPU states:  0.0% user, 35.7% nice,  8.4% system,  0.0% interrupt, 55.9% idle
> > > CPU states:  0.0% user, 82.5% nice,  5.6% system,  0.0% interrupt, 11.9% idle
> > > CPU states:  0.0% user, 14.7% nice,  2.8% system,  0.0% interrupt, 82.5% idle
> > > Memory: 2319M Active, 3665M Inact, 5464M Wired, 493M Cache, 1592M Buf, 3679M Free
> > > Swap: 32G Total, 81M Used, 32G Free
> >
> > And while mostly (or entirely) idle:
> >
> > > load averages:  0.44,  0.37,  0.30;               up 1+03:14:08        15:36:26
> > > 56 processes: 1 running, 56 active
> > > CPU states:  0.0% user,  0.7% nice, 24.6% system, 71.2% interrupt,  3.5% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.7% system,  0.0% interrupt, 99.3% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > > Memory: 647M Active, 6448M Inact, 3769M Wired, 766M Cache, 1523M Buf, 3991M Free
> > > Swap: 32G Total, 63M Used, 32G Free
> >
> > It even happens in single user mode:
> >
> > > load averages:  0.11,  0.05,  0.02;               up 0+00:01:46        16:01:04
> > > 2 processes: 1 running, 2 active
> > > CPU states:  0.0% user,  0.0% nice, 21.0% system, 76.2% interrupt,  2.8% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > > CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > > Memory: 1188K Active, 128K Inact, 1112M Wired, 4680K Cache, 30M Buf, 14G Free
> > > Swap:
> > >
> > >    PID USERNAME   NICE  SIZE    RES    STATE   C   TIME   CTIME    CPU COMMAND
> > >     42 root         0  4392K  2220K     CPU2   2   0:00    0:00  0.00% top
> > >     20 root         0  4320K  1800K     wait   1   0:00    0:00  0.00% sh
> >
> > I am using a GENERIC kernel from master on an i5-6600 @ 3.30GHz, with i915
> > for graphics and my NIC set to poll.  My FS is HAMMER on a dm-crypt device
> > and this is an EFI install on a single hard drive.
> >
> > I'm not sure what I'm doing wrong here, but any pointers would be
> > much appreciated...
> >
> > Mike
> >
> >
> 
> 
> -- 
> Tomorrow Will Never Die
> 


