[Ksummit-2012-discuss] [ATTEND] stable kernel stuff and grumpy maintainers [bisection/rebase/-next]

Jason Wessel jason.wessel at windriver.com
Tue Jun 19 17:38:53 UTC 2012

On 06/19/2012 12:23 PM, Steven Rostedt wrote:
> On Tue, 2012-06-19 at 12:08 -0500, Jason Wessel wrote:
>> I don't know how many cycles are available in linux-next builds, but I
>> wonder about collaborating with the guy who said he could do 25,000
>> kernel compiles a day.  Of interest to me is some reporting on, and
>> encouragement toward, fixing bisection quality in the kernel as a
>> whole.
> This was the very reason I created ktest.pl. The first test that was
> created for it was the 'patchcheck' test. It works with a git repo (but
> may fail with merges); you give it a starting commit and an ending
> commit, and it will build, boot, and test each commit to ensure that
> the series will bisect nicely.
> The patchcheck is also the only test that checks the files that the
> commit touches, and fails if a file produces a warning during build. I
> need to add a flag that disables this, but it is very useful.

Are you saying that patchcheck can examine a single patch for the scope
of what it changed, output all the KCONFIG options that affect any
changed macro definitions or ifdefs in the code, and generate a
complete series of X kernels to compile to prove you didn't break
anything?  If so, I want that tool; I didn't think such a thing
existed.  Any randconfig only takes you so far, whereas you could
weight things by the #ifdefs in the file, versus the #ifdefs in the
include files, versus the macros used in the file and their origin,
etc.  Patchcheck really only appears to compile a single case for
bisection, but one never knows what ktest.pl will have in it by the
end of the year. :-)
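To make the idea concrete, here is a rough sketch (purely hypothetical,
not anything ktest.pl actually does) of the first step: pulling the
CONFIG_ symbols out of a patch's changed lines, which would seed the
list of options a build matrix should toggle.

```python
import re

# Hypothetical sketch: scan a unified diff for preprocessor
# conditionals and CONFIG_ symbols touched by the patch, as a seed for
# which Kconfig options a per-patch build matrix should toggle.

IFDEF_RE = re.compile(r'^[+-]\s*#\s*(?:ifdef|ifndef|elif|if)\b(.*)')
CONFIG_RE = re.compile(r'\bCONFIG_[A-Z0-9_]+\b')

def configs_touched(diff_text):
    """Return the set of CONFIG_ symbols referenced in changed lines."""
    opts = set()
    for line in diff_text.splitlines():
        # Only look at added/removed lines, not context or file headers.
        if not line or line[0] not in '+-' or line.startswith(('+++', '---')):
            continue
        m = IFDEF_RE.match(line)
        text = m.group(1) if m else line
        opts.update(CONFIG_RE.findall(text))
    return opts

sample = """\
--- a/kernel/debug/debug_core.c
+++ b/kernel/debug/debug_core.c
@@ -1,4 +1,5 @@
-#ifdef CONFIG_KGDB_KDB
+#if defined(CONFIG_KGDB_KDB) && defined(CONFIG_SMP)
 ...
"""
print(sorted(configs_touched(sample)))
# -> ['CONFIG_KGDB_KDB', 'CONFIG_SMP']
```

The weighting you'd want (file-local #ifdefs vs. include-file #ifdefs
vs. macro origin) would sit on top of something like this.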

Bisection with one config is one thing, and yes, I too have a
"framework" for that.  Bisection with arbitrary configs and the
dependent sets of variables that affect a subsystem is another.  I have
a very, very minimal way of testing this, with a list of specific
options; every previous case where I have broken linux-next is in that
list, of course, and of course I wish it had never happened in the
first place.

On a slightly different tangent, I would love to have a tool to help
bisect .config runtime failures.  ktest.pl is the closest thing I know
of, but it doesn't work for some cases.  Luckily I don't have to do
this too often.  The last time I did, I almost wrote such a tool,
because ktest does not yet handle the case where the bad config's
options are not a subset of the good config's.
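For what it's worth, the core of such a tool could look roughly like
the sketch below: bisect over the symmetric difference of the two
configs, flipping half of the differing options at a time, so it works
even when the bad config's options are not a subset of the good one's.
This is a minimal sketch under big assumptions: a single culprit
option, and a placeholder boots_ok() standing in for a real
build-and-boot test; a real version would also need to resolve Kconfig
dependencies (e.g. via make olddefconfig) after each flip.

```python
# Hypothetical .config bisector. good/bad are dicts mapping
# option name -> value; an option absent from a dict is unset.
# boots_ok(config) is a placeholder for a real build-and-boot test.

def config_bisect(good, bad, boots_ok):
    """Narrow down which differing option reproduces the failure."""
    # All options whose value differs between the two configs,
    # including options present on only one side (the non-subset case).
    diff = [k for k in sorted(set(good) | set(bad))
            if good.get(k) != bad.get(k)]
    suspects = diff
    while len(suspects) > 1:
        half = suspects[:len(suspects) // 2]
        # Start from the good config and flip only 'half' to the
        # bad config's values (or drop them if the bad config unsets them).
        trial = dict(good)
        for k in half:
            if k in bad:
                trial[k] = bad[k]
            else:
                trial.pop(k, None)
        if boots_ok(trial):
            # This half is innocent; the culprit is in the other half.
            suspects = suspects[len(suspects) // 2:]
        else:
            suspects = half
    return suspects
```

With a scripted boot test this converges in log2(N) builds over N
differing options, which is exactly the part that is miserable to do
by hand.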
