[Ksummit-2012-discuss] [ATTEND] Being more things to more people

Paul E. McKenney paulmck at linux.vnet.ibm.com
Wed Jun 27 01:28:35 UTC 2012

On Wed, Jun 27, 2012 at 10:40:02AM +1000, NeilBrown wrote:
> On Tue, 26 Jun 2012 12:22:47 -0700 "Paul E. McKenney"
> <paulmck at linux.vnet.ibm.com> wrote:
> > As with Greg KH and Steven Rostedt, I am on the program committee but
> > am nevertheless submitting a request to attend.  So here you go!
> > 
> > Only ten years ago, the common wisdom held that separate OS kernel
> > source bases were required to serve areas such as server, desktop,
> > smartphone, and real-time/embedded.  The Linux kernel has nevertheless
> > done reasonably well in all of these areas.  Especially in smartphones,
> > which would not have been my first guess ten years ago.
> > 
> > So how far can/should we push this?  There does not appear to be a
> > sharp line: You can get increased performance, scalability, real-time
> > response, energy efficiency, and boot speed from the Linux kernel if
> > you are willing to live with appropriate restrictions in HW/SW
> > configuration or in the workload.  The systems that get the highest
> > scalability (thousands of CPUs) and the best real-time response
> > (tens of microseconds) tend to run extremely constrained workloads.
> > 
> > In particular, from my perspective, the Linux community has been using a
> > just-in-time model for improvements in these areas, producing improvements
> > as they are needed.  On the whole, this has served the community well,
> > especially in terms of avoiding overengineered solutions to non-problems.
> > On the other hand, there has been quite a bit of repeated rework to
> > attain each additional increment of improvement.  Would it make sense
> > to take larger steps?  If so, how do we enable the typical kernel
> > hacker to contribute meaningfully despite the fact that most must
> > make do developing and testing on modest systems?  If not, what should
> > we be doing to avoid "work hardening" Linux as we repeatedly rework it?
> I like your picture of "work hardening" Linux - it makes it more brittle.
> If I remember my junior-high metal-work course correctly, the recommended
> way to deal with unwanted work-hardening is "annealing".  So we should stick
> Linux in a big furnace every so often (remember the 'odd' numbered kernels -
> 2.1, 2.3 ?  No, I'm not suggesting a return to those days).
> I think the closest we have to annealing in modern software engineering is
> called "refactoring".  We learn a lot while developing Linux and sometimes
> there is value in going back and changing things to match the new patterns
> that have been learned.
> Sometimes doing that is accused of being "churn" - which to some extent is
> true.  But not all churn is bad.  A good example is Rafael's:
>    [RFC] Plan to get rid of all legacy PM handling
> (https://lkml.org/lkml/2012/6/17/192).  This looks like good refactoring
> of some work-hardened code.

Indeed, one can over-anneal/refactor just as surely as one can
over-work-harden.

> So "what should we be doing" is maybe just encouraging good refactoring (as
> Greg KH did - explicitly supporting that PM change).

Good refactoring is wonderful, but avoiding the need for refactoring
seems like it has a place as well.

> On your other point - avoiding repeated re-engineering by taking bigger steps
> - I'm not sure if that is really possible.

The best example I know of for this is Sequent's original move from one
CPU to 30 CPUs in a single step.  This was in the early 1980s, before my
time with them, but it shows that larger steps really are possible.

>                                             Your point about over-engineering
> solutions for non-problems is very relevant (been there, done that).  I think
> the only way to take the bigger steps you suggest is by applying vision and
> taste, which seem to require equal parts of experience and luck, and those
> are a challenge to teach.

It should also be possible to get a reasonably good idea how the world
will be changing.  From what I can see, we are not yet at the end of
the road for increasing core counts, nor for improved energy efficiency,
nor for use of non-CPU accelerators such as GPUs.

> > And why not also a recent pet peeve?  For me, it is CPU-numbering schemes
> > that map CPUs that are electrically adjacent to wildly different CPU
> > numbers.  ;-)
> Prime numbers on the left, composite numbers on the right?

Nah, you should split between the rationals and the irrationals.
But in a USA election year especially, there is no way I am going to
say which is left and which is right.  ;-)
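To make the CPU-numbering peeve concrete, here is a small sketch of one
common (but here entirely hypothetical) enumeration scheme: all the
first hardware threads of every core are numbered before any of their
SMT siblings, so the two threads sharing a core - about as electrically
adjacent as two CPUs can be - end up with CPU numbers half the machine
apart.  The topology (2 sockets x 4 cores x 2 threads) and the
`cpu_id()` helper are assumptions for illustration; on a real Linux
system the actual mapping can be read from
/sys/devices/system/cpu/cpuN/topology/.

```python
# Hypothetical topology: 2 sockets x 4 cores x 2 SMT threads per core.
SOCKETS, CORES, THREADS = 2, 4, 2

def cpu_id(socket, core, thread):
    # One common enumeration: a first pass over every core assigning
    # thread 0, then a second pass assigning the SMT siblings.  The
    # two threads of one core therefore differ by SOCKETS * CORES = 8.
    return thread * SOCKETS * CORES + socket * CORES + core

# Collect the CPU numbers of each core's sibling threads.
siblings = {}
for s in range(SOCKETS):
    for c in range(CORES):
        siblings[(s, c)] = [cpu_id(s, c, t) for t in range(THREADS)]

for (s, c), ids in sorted(siblings.items()):
    print(f"socket {s} core {c}: CPUs {ids}")
```

Running this prints pairs such as "socket 0 core 0: CPUs [0, 8]" -
electrically adjacent threads, numerically distant CPU numbers, which
is exactly the sort of mapping the peeve is about.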

							Thanx, Paul
