[PATCH 2/2] exec: move core_pattern pipe helper into the crashing namespace

Will Drewry wad at chromium.org
Mon Sep 20 13:28:52 PDT 2010


On Mon, Sep 20, 2010 at 1:50 PM, Oleg Nesterov <oleg at redhat.com> wrote:
> On 09/17, Will Drewry wrote:
>>
>> On Fri, Sep 17, 2010 at 8:29 PM, Oleg Nesterov <oleg at redhat.com> wrote:
>> >
>> > This looks overcomplicated to me, or I missed something.
>> >
>> > I do not understand why should we do this beforehand, and why we need
>> > copy_namespaces_unattached().
>> >
>> > Can't you just pass current to umh_pipe_setup() (or another helper) as
>> > the argument? Then this helper can copy ->fs and ->nsproxy itself.
>>
>> I wasn't sure if it was reasonable to pass the current task_struct
>> over, but I certainly can.
>
> Why not? current calls call_usermodehelper_exec(), it can't go away
> until subprocess_info->init() returns, it sleeps on wait_for_completion().

Yeah - I wasn't sure because the other coredump_params fields don't
carry it, so I assumed there was some history behind that.  Though it
sounds like the current approach may not be the way forward anyhow.
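
For concreteness, something along these lines is what I'd try - a
rough, untested sketch only: it assumes the coredump path stashes the
crashing task in info->data, and it elides locking and cleanup on the
error path.  get_nsproxy(), switch_task_namespaces(), copy_fs_struct()
and exit_fs() are the existing kernel helpers; everything else here is
illustrative:

    static int umh_pipe_setup(struct subprocess_info *info)
    {
            /* assumption: the coredump path set info->data to the
             * crashing task before calling call_usermodehelper_exec() */
            struct task_struct *crashed = info->data;
            struct fs_struct *fs;

            /* share the crashing task's namespaces, as fork() would;
             * crashed is pinned in wait_for_completion(), per above */
            get_nsproxy(crashed->nsproxy);
            switch_task_namespaces(current, crashed->nsproxy);

            /* copy ->fs so path lookups resolve against the same root */
            fs = copy_fs_struct(crashed->fs);
            if (!fs)
                    return -ENOMEM;
            exit_fs(current);       /* drop the helper's original fs_struct */
            current->fs = fs;

            return 0;
    }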

>> In practice, this seems to amount to just adding a refcount to all the
>> namespaces and creating a new nsproxy which isn't really needed.  Most
>> likely, doing what you've suggested above plus the copy_fs_struct and
>> the swap out will do the trick.  I'll try it out and see.  That'd make
>> it much clearer, I think.
>
> Yes, just get_nsproxy() (like fork() does) should be fine in this case.
>
> As for copying ->fs, I am not sure actually. core_pattern is global,
> say it is "|/coredumper". If you change ->root, then exec can fail
> because that binary is not visible to the coredumping process?

Yeah - it's lose-lose, I think.  On one hand, the helper may fail to
run at all; on the other, it may have access where it shouldn't, or
lack access where it needs it.

> Probably we should move core_pattern into ->pid_ns, I dunno.

Sounds like this is worth doing. I'll look into it and post something
for further consideration!
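Off the top of my head, the direction might look like the below -
completely untested, and the field and handler names are made up for
illustration.  CORENAME_MAX_SIZE, task_active_pid_ns() and
proc_dostring() are the existing kernel bits; the rest is hypothetical:

    /* per-namespace pattern instead of the global in fs/exec.c */
    struct pid_namespace {
            /* ... existing members ... */
            char core_pattern[CORENAME_MAX_SIZE];
    };

    /* sysctl handler reads/writes the caller's own namespace copy */
    static int proc_core_pattern(struct ctl_table *table, int write,
                                 void __user *buffer, size_t *lenp,
                                 loff_t *ppos)
    {
            struct pid_namespace *ns = task_active_pid_ns(current);
            struct ctl_table t = *table;

            t.data = ns->core_pattern;
            t.maxlen = sizeof(ns->core_pattern);
            return proc_dostring(&t, write, buffer, lenp, ppos);
    }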
thanks again -
will
