[Ksummit-2012-discuss] [ATTEND] <ding> "Bring out your dead" <ding>...
rostedt at goodmis.org
Fri Jun 29 16:06:29 UTC 2012
On Fri, 2012-06-29 at 11:28 -0400, Paul Gortmaker wrote:
> As to your why question, I touched on this in the original post:
> For an API update, we end up editing files
> that are no longer actively used. For a "git grep" you've searched
> files that you shouldn't need to care about. For an "allyesconfig" you
> (and all the automated build tests across the planet) have waited on the
> successful outcome of a compilation that doesn't matter. For runs of
> Coccinelle/sparse/coverity, you are doing analysis on (and proposing
> fixes to) code that has expired from use. No one item is huge, but they
> add up, and they affect us all.
> I can think of a couple more. Recall when JeffK sorted all the
> network drivers? Any time we do a reorg of stuff, we end up
> making Makefile/Kconfig changes for drivers that haven't been
> used for years. Or when you meander into a cold git repo
> and type git status and then wait while we stat files that
> nobody has touched in a decade.
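
That "cold git repo" cost is easy to make concrete. Here's a rough sketch of
listing files whose last commit is more than a decade old -- the two-file demo
repo is purely illustrative (a real kernel tree would skip the setup and just
run the loop):

```shell
# Assumption: the tiny demo repo below stands in for a real tree.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email hobbyist@example.com
git config user.name hobbyist

# One file last committed in 2000, one committed just now.
echo old > ancient.c
git add ancient.c
GIT_AUTHOR_DATE='2000-01-01T00:00:00' GIT_COMMITTER_DATE='2000-01-01T00:00:00' \
    git commit -qm 'ancient driver'
echo new > fresh.c
git add fresh.c
git commit -qm 'fresh driver'

# List every tracked file whose last commit is over ten years old.
# %ct is the committer date as a Unix timestamp.
cutoff=$(( $(date +%s) - 10 * 365 * 24 * 3600 ))
git ls-files | while read -r f; do
    last=$(git log -1 --format=%ct -- "$f")
    if [ "$last" -lt "$cutoff" ]; then
        echo "stale: $f"     # prints "stale: ancient.c"
    fi
done
```

On a real tree this is slow for exactly the reason quoted above: git has to
walk history (and stat the working copy) for files nobody cares about anymore.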
First off, I need to say that I'm definitely one who is affected by all
this. There have been several times when working on either ftrace or -rt
that the thought has crossed my mind: "gee, if we do this, we must
update all these drivers/archs". Function tracing is a per-arch thing,
but luckily, as it is still new, only active archs have it implemented.
I also regularly perform allmodconfig and allyesconfig builds. This is a
burden on me, and I really feel the need to purge. But I'm also at a
point where I hate to remove something that is still being used, even if
only by a few people.
> I agree that we should actively strive to shift the load of
> maintaining feature X to the people who care about X. The
> interesting case is where we get some cool new change to
> a core area or arch. Do we block a complex change giving a
> 10% speedup on x86 because nobody is around to make the similar
> deep thought asm changes on some of the more fringe archs?
Been there ;-)
> Is the onus on the person proposing the change to coordinate
> all the archs before moving ahead? I realize that there are
> no hard yes/no answers here...
My solution is basically to make a half-assed effort to port the change
to the arch, and email it out to the maintainers. For active archs, I do
much more than a half-assed effort, and will usually get in contact with
the maintainers for a real solution. But for those weird archs, if the
changes are ignored, then the half-assed effort is what goes in place,
if any change is made at all (just make it compile, if possible).
> > Remember, the birth of Linux was for that one guy with the hobby. Let's
> > not stomp on him now that Linux is in the big leagues.
> Fortunately the choice and general availability of hardware has
> gone up and the cost has gone down. Meaning that the hobbyist
> isn't confined to working with ancient stuff, like we were in
> the early 90's. Regardless of that, I'm not suggesting we go
> all super elite snob mode and toss out everything that is more
> than X years old. I'm just suggesting we do toss out the stuff
> that isn't being actively used, or couldn't be used even if
> you wanted to (i.e. non-functional due to bit-rot).
I think we agree here.
> The linux-legacy idea is interesting, in that it keeps the stuff
> in a common well known place, so the hobbyist can still find it
> easily. It also means that it can move at its own pace, based
> on who has time to work on which fringe bits. And all the
> incremental cost issues I quoted above largely go away, or
> at least get loaded onto the people who care about that code.
I agree here too.
> It can even apply its own set of standards, i.e. someone could
> revive xiafs and put it in tree if they wanted to. They could
> also choose their own branching strategy, if say there was
> a showstopper change that prevented arch=foo from being
> carried beyond v3.x due to overwhelming difficulty, then keep
> a branch for v3.x and arch=foo. Months later and some guru
> fixes arch=foo to boot on v3.(x+1) and then you can delete
> the old branch, etc. -- all at their own pace. The more I think
> about it, the better it seems.
> >> 2) With point #1 in mind, there simply isn't the absolute finality
> >> associated with mainline removal in linux as there would be in the
> >> binary world, say if Windows8 said they were dropping Adaptec AHA1542
> >> SCSI support (assuming it isn't long gone already.)
> >> 3) Things that are being considered for removal are least likely to
> >> see an advantage by being on the bleeding edge. Take the MCA case.
> >> If you'll allow me the conclusion that MCA on 3.5 wouldn't have
> >> been any fundamentally better than MCA on 3.4 -- then you can
> >> get on the (semi announced) 3.4 LTSI train, and have a supported
> >> kernel base for your MCA that will take you well into the year 2014!
> > I was also thinking that perhaps this could be the rationale for upping
> > the major number. I know Linus said that the major number is just that,
> > a number, but instead of a number meaning what features and archs are
> > being added, have it mean what features and archs are being removed :)
> If the opportunity arose, and we could take advantage of it to do
> that, then sure. But the last big version change wasn't something
> that was really planned years in advance (at least not as far as I know).
No planning was involved. It was basically Linus saying "the 2.6 numbers
are too high, let's reset". And it seems that's his plan for going to
4.0 too. But if we can get Linus to say we will switch to 4.0 after we
hit 3.20 or so, then we can make plans for that release to dump old
stuff.