[Ksummit-2012-discuss] [ATTEND] Memory management

Srivatsa S. Bhat srivatsa.bhat at linux.vnet.ibm.com
Wed Jun 20 06:26:12 UTC 2012


On 06/19/2012 10:46 PM, Johannes Weiner wrote:

> Hello,
> 
> I would like to attend KS2012.
> 
> These days I'm focussed on memory cgroups (performance optimization
> and better integration into the rest of the VM) and page reclaim (in
> memcgs, in the presence of dirty pages, and better adapting to changes
> in the page cache working set and protection of existing cache).
> 
> Procedurally, I wish there was a separate memory management tree that
> isn't based on -next.  As was apparent before, and established in the
> linux-next thread, quite a few half-baked changes make it into -next
> and developing on top of that is very hard.  Currently, I develop,
> test, and benchmark against Linus' releases, and then rebase to -mm
> for smoke tests and submission, which is crap, but at least I can make
> progress.  I expect a requirement for this would be to take something
> else off of Andrew's hands, and if so, I'm not sure how to do that.
> 
> I would like to discuss if there is interest in a framework for
> benchmark result comparison.  Everyone seems to have their own
> (ad-hoc) benchmark suites with custom invocation, evaluation, and
> comparison scripts.  To make both test and evaluation recipes easily
> exchangeable, I hacked together some tools that take job spec files
> describing workload and data gathering, and evaluation spec files
> describing how to extract and present items from the collected data,
> to compare separate collection runs (kernel versions) at the push of a
> button.  It also has individual tools that can be stuck together in
> shell pipes to quickly explore, plot etc. a set of data.  It's not
> meant to replace existing suites, but rather to wrap them in job and
> evaluation recipes that are easy to modify, extend, and share.  Maybe
> this can be a foundation for building a common benchmark suite that
> can be transferred and set up quickly, and includes agreed upon
> performance/behaviour metrics so people can do regression testing
> without much effort?  Code for the daring is available here:
> http://git.cmpxchg.org/?p=statutils.git;a=summary
> 
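
To give a flavour of the job spec / evaluation spec split described
above, here is a rough Python sketch. The spec layout, key names, and
functions are illustrative guesses, not statutils' actual interface:

#!/usr/bin/env python3
# Rough sketch of the job-spec / evaluation-spec split described above.
# All formats, keys, and names here are illustrative assumptions, not
# statutils' actual interface.
import subprocess

def run_job(jobspec):
    """Run each workload in the job spec, return {workload: raw output}."""
    raw = {}
    for name, cmd in jobspec["workloads"].items():
        raw[name] = subprocess.run(cmd, shell=True, text=True,
                                   capture_output=True).stdout
    return raw

def evaluate(evalspec, raw):
    """Extract one number per (workload, metric) from collected output."""
    table = {}
    for metric in evalspec["metrics"]:
        for name, out in raw.items():
            for line in out.splitlines():
                if metric["match"] in line:
                    table[(name, metric["name"])] = \
                        float(line.split()[metric["field"]])
    return table

def compare(base, new):
    """Show the relative change of every metric between two runs."""
    for key in sorted(base):
        delta = 100.0 * (new[key] - base[key]) / base[key]
        print("%s/%s: %.2f -> %.2f (%+.1f%%)"
              % (key[0], key[1], base[key], new[key], delta))

if __name__ == "__main__":
    jobspec = {"workloads": {"noop": "echo result: 42"}}
    evalspec = {"metrics": [{"name": "score", "match": "result:",
                             "field": 1}]}
    base = evaluate(evalspec, run_job(jobspec))
    compare(base, base)  # comparing a run against itself: all deltas 0%

The point of the split is that two collection runs (say, two kernel
versions) can be compared with the same evaluation spec at the push of
a button, as the quoted mail describes.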


Looks very interesting! For focussed regression testing, we can even
categorize the benchmarks into groups such as cache-sensitive,
constant-throughput, and constant-time benchmarks, and then build
regression test suites out of those groups for specific subsystems. For
example, we could employ the cache-sensitive set to observe regressions
in the scheduler's task migration behaviour.
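
A minimal sketch of that grouping idea, again in Python; the benchmark
names and their tags below are purely hypothetical:

# Hypothetical grouping of benchmarks by sensitivity, so that a
# subsystem-specific regression suite can be selected by tag. The
# benchmark names and tag assignments are made up for illustration.
BENCHMARKS = {
    "stream-copy": {"cache-sensitive"},
    "netperf-rr":  {"constant-throughput"},
    "kernbench":   {"constant-time"},
    "hackbench":   {"cache-sensitive", "constant-time"},
}

def suite_for(tag):
    """Return the benchmarks whose tag set includes the given group."""
    return sorted(name for name, tags in BENCHMARKS.items() if tag in tags)

# e.g. exercise scheduler task migration with the cache-sensitive set:
print(suite_for("cache-sensitive"))   # ['hackbench', 'stream-copy']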
 
Regards,
Srivatsa S. Bhat


