[linux-pm] [RFC][PATCH 2/2] PM: Rework handling of interrupts during suspend-resume

Ingo Molnar mingo at elte.hu
Mon Feb 23 03:04:33 PST 2009


* Eric W. Biederman <ebiederm at xmission.com> wrote:

> > What makes s2ram fragile is not human failure but the 
> > combination of a handful of physical properties:
> >
> > 1) Psychology: shutting the lid or pushing the suspend button is 
> >    a deceptively 'simple' action to the user. But under the 
> >    hood, a ton of stuff happens: we deinitialize a lot of 
> >    things, we go through _all hardware state_, and we do so in a 
> >    serial fashion. If just one piece fails to do the right 
> >    thing, the box might not resume. Still, the user expects this 
> >    'simple' thing to just work, all the time. No excuses 
> >    accepted.
> >
> > 2) Length of code: To get a successful s2ram sequence the 
> >    kernel runs through tens of thousands of lines of code - 
> >    code that never gets executed on a normal box, only when 
> >    we s2ram. If just one step fails, we get a hung box.
> >
> > 3) Debuggability: a lot of s2ram code runs with the console 
> >    off, making any bugs hard to debug. Furthermore, we have 
> >    no meaningful persistent storage for kernel bug messages 
> >    either. The RTC trick of PM_DEBUG works, but it is a very 
> >    narrow channel of information and it takes a lot of time 
> >    to track down a bug via that method.
> 
> Yep that is an issue.

I'd also like to add #4:

     4) One more thing that makes s2ram special is that the 
        resume path often finds the hardware in an even more 
        deinitialized state than during normal bootup. During 
        normal bootup the BIOS/firmware has at least done some 
        minimal bootstrap (to get the kernel loaded), which 
        makes life easier for the kernel.

        At the s2ram resume stage we've got an almost completely 
        raw hardware state, with only very minimal firmware 
        activation. So many of the init and deinit problems and 
        bugs are hit only in the s2ram path - a dynamic which, 
        again, does not help.
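
As an aside on the "narrow channel" in #3: the RTC trick boils 
down to hashing a progress marker (device name, call site) into 
the few RTC/CMOS bytes that survive a hard reboot, and matching 
that hash against known devices on the next boot. Below is a 
rough, purely illustrative user-space sketch of the idea - the 
hash function, the three-byte layout and the device name are all 
made up here; the kernel's real implementation is the PM_TRACE 
code in drivers/base/power/trace.c, which encodes its hash into 
the RTC clock registers:

/*
 * Illustrative sketch of the PM_TRACE-style "RTC trick": hash a
 * progress marker (e.g. device name + line number) into a handful
 * of bytes that would survive a hard reboot.  The hash and the
 * three-byte "register" layout are invented for illustration.
 */
#include <stdio.h>
#include <stdint.h>

/* Pretend we only have three spare CMOS bytes to report progress in. */
static uint8_t fake_rtc[3];

/* Simple FNV-1a style hash, folded down to 24 bits. */
static uint32_t hash_marker(const char *dev, unsigned int line)
{
        uint32_t h = 2166136261u;

        for (; *dev; dev++)
                h = (h ^ (uint8_t)*dev) * 16777619u;
        h = (h ^ line) * 16777619u;

        return h & 0xffffffu;           /* only 24 bits fit in our "RTC" */
}

/* Record "we got this far" before touching the next piece of hardware. */
static void trace_suspend_step(const char *dev, unsigned int line)
{
        uint32_t h = hash_marker(dev, line);

        fake_rtc[0] = h & 0xff;
        fake_rtc[1] = (h >> 8) & 0xff;
        fake_rtc[2] = (h >> 16) & 0xff;
}

int main(void)
{
        /* Hypothetical resume step; in the kernel this is done per device. */
        trace_suspend_step("0000:00:1f.2", __LINE__);

        /*
         * After a hang + reboot, all that can be recovered is this
         * 24-bit hash, which then has to be matched against every
         * known device/call site - hence "a very narrow channel".
         */
        printf("surviving marker: %02x%02x%02x\n",
               fake_rtc[2], fake_rtc[1], fake_rtc[0]);
        return 0;
}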

> > The combination of these factors really makes for a perfect 
> > storm in terms of kernel technology: we have this 
> > very-deceptively-simple-looking but 
> > complex-and-rarely-executed piece of code, which is very 
> > hard to debug.
> 
> And much of this, as you are finding with this piece of code, 
> is how the software was designed rather than how the software 
> needed to be.

Well, most of the 4 problems above are externalities and cannot 
go away just by fixing the kernel.

 #1 will always be with us.
 #3 needs the hardware to change. It's happening, but slowly.
 #4 will be with us as long as there are non-Linux BIOSes.

#2 is the only thing where we can make a realistic difference,
but there's only so much we can do there.

And that still leaves the other three items, each of which is 
a powerful enough force to give a bad name to any normal 
subsystem.

	Ingo

