[Ksummit-2012-discuss] [ATTEND] <ding> "Bring out your dead" <ding>...

Paul Gortmaker paul.gortmaker at windriver.com
Fri Jun 29 15:28:41 UTC 2012


On 12-06-28 10:28 PM, Steven Rostedt wrote:
> On Mon, 2012-06-25 at 20:03 -0400, Paul Gortmaker wrote:
> 
>>  1) Once you step outside of the mainstream arch and/or into the hobby
>>     realm, you are pretty much guaranteed to not be using a distro, but
>>     rather a custom build, custom .config and even custom kernel.  But
>>     of course you have the source too.  How else could you have
>>     created your custom build?  So, you are free to revert/patch back in
>>     whatever driver(s) or support you want, and share it (made easy with
>>     github) with whoever you want.  If there is interest in it, it will
>>     survive.  Even if you are the one and only person interested, you've
>>     still made it survive.  The actual location of the source code really
>>     is not of unconditional importance.
> 
> Although, if there's one person willing to keep it alive, why not let it
> stay in the kernel. But place more burden on that one person to make
> sure updates work for it.

As to your "why" question, I touched on this in the original post: 

  For an API update, we end up editing files
  that are no longer actively used.  For a "git grep" you've searched
  files that you shouldn't need to care about.  For an "allyesconfig" you
  (and all the automated build tests across the planet) have waited on the
  successful outcome of a compilation that doesn't matter.  For runs of
  Coccinelle/sparse/coverity, you are doing analysis on (and proposing
  fixes to) code that has expired from use.  No one item is huge, but they
  add up, and they affect us all.

I can think of a couple more.  Recall when JeffK sorted all the
network drivers?  Any time we do a reorg like that, we end up
making Makefile/Kconfig changes for drivers that haven't been
used for years.  Or when you meander into a cold git repo and
type git status, then wait while it stats files that nobody
has touched in a decade.
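
(As a rough illustration of that last point, something like the
following sketch could flag the stale files.  This is hypothetical
Python driving git, not an existing kernel tool -- it just lists
tracked files whose last commit is more than ten years old:

  #!/usr/bin/env python3
  # Hypothetical sketch: print tracked files whose last commit is
  # older than YEARS years -- i.e. candidates for the "is anyone
  # really using this?" question.
  import subprocess
  import time

  YEARS = 10
  cutoff = time.time() - YEARS * 365 * 24 * 3600

  # Every file tracked by git in the current repo.
  for path in subprocess.check_output(
          ["git", "ls-files"], text=True).splitlines():
      # %ct = committer date (seconds since the epoch) of the
      # last commit touching this file.
      stamp = subprocess.check_output(
          ["git", "log", "-1", "--format=%ct", "--", path],
          text=True).strip()
      if stamp and int(stamp) < cutoff:
          print(path)

Running "git log" once per file is painfully slow on a tree the
size of the kernel, so a real version would want to walk the
history once instead, but it shows the idea: last-touched date is
a cheap first-pass filter, even though it can't tell you whether
anyone still runs the code.)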

I agree that we should actively strive to shift the load for
maintaining feature X onto the people who care about X.  The
interesting case is where we get some cool new change to a
core area or arch.  Do we block a complex change giving a
10% speedup on x86 because nobody is around to make the
similarly deep asm changes on some of the fringe arches?
Is the onus on the person proposing the change to coordinate
all the arches before moving ahead?  I realize that there are
no hard yes/no answers here...

> 
> Remember, the birth of Linux was for that one guy with the hobby. Let's
> not stomp on him now that Linux is in the big leagues.

Fortunately the choice and general availability of hardware have
gone up and the cost has gone down, meaning that the hobbyist
isn't confined to working with ancient stuff like we were in
the early 90's.  Regardless of that, I'm not suggesting we go
all super elite snob mode and toss out everything that is more
than X years old.  I'm just suggesting we toss out the stuff
that isn't being actively used, or couldn't be used even if
you wanted to (i.e. non-functional due to bit-rot).

The linux-legacy idea is interesting, in that it keeps the stuff
in a common well known place, so the hobbyist can still find it
easily.  It also means that it can move at its own pace, based
on who has time to work on which fringe bits.  And all the
incremental cost issues I quoted above largely go away, or
at least get loaded onto the people who care about that code.

It can even apply its own set of standards, i.e. someone could
revive xiafs and put it in tree if they wanted to.  They could
also choose their own branching strategy: if, say, there was
a showstopper change that prevented arch=foo from being
carried beyond v3.x due to overwhelming difficulty, they could
keep a branch for v3.x and arch=foo.  Months later, some guru
fixes arch=foo to boot on v3.(x+1), and then the old branch
can be deleted, etc. -- all at their own pace.  The more I think
about it, the better it seems.

> 
>>
>>  2) With point #1 in mind, there simply isn't the absolute finality
>>     associated with mainline removal in linux as there would be in the
>>     binary world, say if Windows8 said they were dropping Adaptec AHA1542
>>     SCSI support (assuming it isn't long gone already.)
>>
>>  3) Things that are being considered for removal are least likely to
>>     see an advantage by being on the bleeding edge.  Take the MCA case.
>>     If you'll allow me the conclusion that MCA on 3.5 wouldn't have
>>     been any fundamentally better than MCA on 3.4 -- then you can
>>     get on the (semi announced) 3.4 LTSI train, and have a supported
>>     kernel base for your MCA that will take you well into the year 2014!
> 
> I was also thinking that perhaps this could be the rationale for upping
> the major number. I know Linus said that the major number is just that,
> a number, but instead of a number meaning what features and archs are
> being added, have it mean what features and archs are being removed :)

If the opportunity arose, and we could take advantage of it to do
that, then sure.  But the last big version change wasn't something
that was really planned years in advance (at least not as far as I know).

> 
> I'm sure Linus won't go for that either. But if we do up the major number
> every 10 years, we can make announcements years ahead of time saying
> what's going to be dropped at that point. Have people speak up and say
> they will (and actively do) maintain it, otherwise it's gone.
> 
> A test of maintainership is simply letting the legacy code break. If it
> does not get fixed in a few years, rip it out. If people are maintaining
> it out of tree, and not pushing patches into the mainline kernel, then
> they can keep all the code out of tree and not burden the rest of the
> developers with it.

This is the default action now, i.e. leave it in tree and let it rot
there until it becomes painfully obvious that there is no way anyone
could be using it.  But in doing so, we steal time from the people
who are doing the right thing by making sure their changesets
extend into the unused code, to update APIs or prevent compile
failures.  All I'm suggesting is a slightly more proactive approach,
based on a realistic consideration of the "Is anyone really using
this?" question.

> 
> And one final note... +1 on your subject reference ;-)

:)

Paul.
--

> 
> -- Steve
> 
> 

