[Ksummit-2012-discuss] [ATTEND] Being more things to more people

Paul E. McKenney paulmck at linux.vnet.ibm.com
Tue Jun 26 19:22:47 UTC 2012


Like Greg KH and Steven Rostedt, I am on the program committee but
am nevertheless submitting a request to attend.  So here you go!

Only ten years ago, the common wisdom held that separate OS kernel
source bases were required to serve areas such as server, desktop,
smartphone, and real-time/embedded.  The Linux kernel has nevertheless
done reasonably well in all of these areas, especially in smartphones,
which would not have been my first guess ten years ago.

So how far can/should we push this?  There does not appear to be a
sharp line: You can get increased performance, scalability, real-time
response, energy efficiency, and boot speed from the Linux kernel if
you are willing to live with appropriate restrictions in HW/SW
configuration or in the workload.  The systems that get the highest
scalability (thousands of CPUs) and the best real-time response
(tens of microseconds) tend to run extremely constrained workloads.
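
To make "appropriate restrictions" concrete, here is a minimal sketch,
not taken from anything above: boot with the isolcpus= kernel parameter
to fence a CPU off from the general scheduler, then pin a SCHED_FIFO
thread to it.  The CPU number and priority are arbitrary illustrations,
and real-time scheduling typically requires root.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 50 };

	/* Assumes the kernel was booted with isolcpus=3. */
	CPU_ZERO(&set);
	CPU_SET(3, &set);
	if (sched_setaffinity(0, sizeof(set), &set) != 0) {
		perror("sched_setaffinity");
		exit(EXIT_FAILURE);
	}

	/* SCHED_FIFO normally requires appropriate privileges. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
		perror("sched_setscheduler");
		exit(EXIT_FAILURE);
	}

	/* Latency-sensitive work would run here, mostly undisturbed. */
	return 0;
}

The point is not this particular recipe, but that each such gain comes
from narrowing what the system is asked to do.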

In particular, from my perspective, the Linux community has been using a
just-in-time model for improvements in these areas, producing improvements
as they are needed.  On the whole, this has served the community well,
especially in terms of avoiding overengineered solutions to non-problems.
On the other hand, there has been quite a bit of repeated rework to
attain each additional increment of improvement.  Would it make sense
to take larger steps?  If so, how do we enable the typical kernel
hacker to contribute meaningfully when most must make do with
developing and testing on modest systems?  If not, what should
we be doing to avoid "work hardening" Linux as we repeatedly rework it?

And why not also a recent pet peeve?  For me, it is CPU-numbering
schemes that assign wildly different numbers to electrically adjacent
CPUs.  ;-)
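
For illustration only, a short sketch that walks sysfs and prints each
logical CPU's package and core IDs; on an affected system, consecutive
CPU numbers can land on distant cores.  The sysfs topology files are
standard, but the output format here is made up.

#include <stdio.h>

int main(void)
{
	char path[128];
	int cpu, core, pkg;
	FILE *f;

	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/core_id",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			break;	/* No such CPU, so we are done. */
		if (fscanf(f, "%d", &core) != 1)
			core = -1;
		fclose(f);

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fscanf(f, "%d", &pkg) != 1)
			pkg = -1;
		fclose(f);

		printf("cpu%d: package %d, core %d\n", cpu, pkg, core);
	}
	return 0;
}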

							Thanx, Paul


